🔬 Research Summary by Kenneth Church, a researcher who works on natural language processing, information retrieval, artificial intelligence, and machine learning.
[Original paper by Kenneth Church, Annika Schoene, John E. Ortega, Raman Chandrasekar, and Valia Kordoni]
Overview: A survey of the literature suggests social media has created a Frankenstein monster that is exploiting human weaknesses. We cannot put our phones down, even though we know it is bad for us (and society). Just as we cannot expect tobacco companies to sell fewer cigarettes and prioritize public health ahead of profits, so too, it may be asking too much of companies (and countries) to stop trafficking in misinformation, given that it is so effective and so insanely profitable (at least in the short term).
Introduction
Trafficking in misinformation is insanely profitable. We should not blame consumers of misinformation for their gullibility or suppliers of misinformation (including adversaries) for taking advantage of the opportunities. Without the market makers creating a market for misinformation and fanning the flames, there would be much less toxicity.
There is a long tradition of prioritizing profits ahead of public health/safety/security. The term “Opium Wars” comes from a strongly worded editorial in 1840, in which conservatives attempted to “own the libs” by linking them to drugs: why risk the legitimate business in tea and textiles so the libs could smuggle opium into China? Social media is addictive, like opium, tobacco, and gambling, with consequences for public health, public safety, and national security.
Key Insights
What happened, and was it our fault?
Machine learning and social media have been implicated in trouble around the world: Myanmar, Sri Lanka, opposition to vaccines, climate change denial, mass shootings, Gamergate, Pizzagate, QAnon, right-wing politics (MAGA, AfD), Charlottesville, Jan 6th, etc. Much has been written in the academic literature connecting the dots between social media addiction, misinformation, polarization, riots, cyberbullying, suicide, depression, eating disorders, and insane profits.
Much of the work in Computer Science focuses on what we call Risks 1.0 (bias and fairness): we are building classifiers that detect toxicity. Journalists accuse us of pivoting: when they want to talk to us about Risks 2.0 (addictive, dangerous, and deadly), we respond with a discussion of recent progress on Risks 1.0 (toxicity detection).
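To make the Risks 1.0 framing concrete, here is a minimal sketch of such a classifier. It is our illustration, not the paper's code: a TF-IDF bag-of-words baseline trained on a handful of hypothetical labeled comments. Production systems use large pretrained models, but the supervised-learning shape is the same.

```python
# A minimal sketch of a Risks 1.0 toxicity classifier (our illustration,
# not the paper's code). The tiny labeled sample below is hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: 1 = toxic, 0 = benign.
texts = [
    "you are an idiot and everyone hates you",
    "go away, nobody wants you here",
    "thanks for sharing, this was really helpful",
    "great point, I had not thought of that",
]
labels = [1, 1, 0, 0]

# TF-IDF features + logistic regression: the classic supervised baseline.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)

# Score a new comment; a real deployment would threshold the probability
# and route borderline cases to human review.
print(clf.predict_proba(["what a stupid take"])[:, 1])  # P(toxic)
```

Detecting toxicity after the fact is Risks 1.0; it says nothing about the recommendation engines that amplify that toxicity, which is the Risks 2.0 question journalists are asking about.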
Root Causes
How does fake news spread? Our literature survey suggests several social media companies have been working over the years on machine learning algorithms for recommending content, producing a Frankenstein monster. These companies stumbled on remarkably effective uses of persuasive technology to exploit human weaknesses. Just as casinos take advantage of addicted gamblers, recommendation algorithms exploit the fact that our craving for likes can never be satisfied. We cannot put our phones down and stop taking dozens of dopamine hits every day, even though we know it is bad for us (and society). Maximizing engagement brings out the worst in people, with significant risks to public health, safety, and national security.
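To see why an engagement-only objective skews toward toxicity, consider the toy ranker below. It is our illustration, not any company's actual algorithm, and the items and scores are hypothetical: items are ordered purely by predicted engagement, so if outrage predicts clicks, outrage tops the feed.

```python
# A toy engagement-maximizing ranker (our illustration, not any company's
# actual algorithm). Items and scores below are hypothetical.
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    p_click: float   # hypothetical predicted click probability
    outrage: float   # hypothetical outrage signal in [0, 1]

def predicted_engagement(item: Item) -> float:
    # In this toy model, outrage boosts predicted engagement, and the
    # objective rewards nothing else: no term penalizes harm.
    return item.p_click * (1.0 + item.outrage)

feed = [
    Item("Calm, nuanced policy explainer", p_click=0.10, outrage=0.05),
    Item("THEY are coming for YOUR children!!!", p_click=0.12, outrage=0.95),
    Item("Cute cat compilation", p_click=0.15, outrage=0.00),
]

# Rank purely by engagement: the outrage item beats the explainer.
for item in sorted(feed, key=predicted_engagement, reverse=True):
    print(f"{predicted_engagement(item):.3f}  {item.title}")
```

Nothing in the objective penalizes harm, which is why moderation (discussed next) amounts to cleaning up after the ranker.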
Moderation is an expensive non-solution
Facebook and YouTube have expensive cost centers that attempt to clean up the mess, but they cannot be expected to keep up with better-resourced profit centers that are pumping out toxic sludge as fast as they can.
Incentives
The problem is that trafficking in misinformation is so insanely profitable. We cannot expect social media companies to regulate themselves. Companies have an obligation to maximize shareholder value. It is easier for nonprofits like Wikipedia to address toxicity because nonprofits are not expected to be profitable.
Competition forces a race to the bottom, where everyone has to do the wrong thing. If one company decides to be generous and do the right thing, it will lose out to a less generous competitor.
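The race to the bottom has the shape of a prisoner's dilemma, which the toy payoff matrix below makes explicit (our illustration; the numbers are hypothetical): whatever the rival does, the ruthless strategy pays more, so unilateral generosity is dominated.

```python
# A toy two-firm payoff matrix (our illustration; the numbers are
# hypothetical). Each firm chooses to moderate aggressively ("generous")
# or to maximize engagement ("ruthless").
payoff = {
    ("generous", "generous"): 6,   # healthy market, shared modest profit
    ("generous", "ruthless"): 1,   # the generous firm loses users to the rival's feed
    ("ruthless", "generous"): 9,   # the ruthless firm captures the rival's engagement
    ("ruthless", "ruthless"): 3,   # toxic equilibrium, profits erode for both
}

for rival in ("generous", "ruthless"):
    best = max(("generous", "ruthless"), key=lambda me: payoff[(me, rival)])
    print(f"If the rival is {rival}, my best response is {best}")
# Both lines print "ruthless": doing the right thing unilaterally loses,
# so the market settles on the toxic equilibrium unless the payoffs change.
```

This is why self-regulation is a non-starter: only an outside force, such as regulation, can change the payoffs for everyone at once.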
Constructive suggestions
What can we do about this nightmare? We liken the current chaos to the Wild West. Just as that lawlessness did not last long because it was bad for business, so too, in the long run, the current chaos will be displaced by more legitimate online businesses.
As for the short term, we are pleasantly surprised by so much pushback from many parties: governments, users, investors, content providers, the press, academics, consumer groups, advertisers, and employees. There must be a way to make it less insanely profitable to traffic in misinformation. Regulators should “follow the money” and “take away the punch bowl.” Regulation is taken very seriously in Europe.
Between the lines
If markets are efficient, rational, and sane, at least in the long term, then insane profits cannot continue for long. There are already hints that the short-term business case may falter at Twitter and Facebook. You know it must be bad for social media companies when The Late Show with Stephen Colbert makes jokes at their expense. It feels like the jokes we used to hear just before “Ma Bell” was split up into a bunch of “Baby Bells.” In the long run, chaos is bad for business (and many other parties). We anticipate a sequel to “How the West Was Won” entitled “How the Web Was Won,” giving a whole new meaning to WWW.