Montreal AI Ethics Institute

Emerging trends: Unfair, biased, addictive, dangerous, deadly, and insanely profitable

May 31, 2023

🔬 Research Summary by Kenneth Church, a researcher working on natural language processing, information retrieval, artificial intelligence, and machine learning.

[Original paper by Kenneth Church, Annika Schoene, John E. Ortega, Raman Chandrasekar and Valia Kordoni]


Overview: A survey of the literature suggests social media has created a Frankenstein monster that exploits human weaknesses. We cannot put our phones down, even though we know it is bad for us (and for society). Just as we cannot expect tobacco companies to sell fewer cigarettes and put public health ahead of profits, it may be asking too much of companies (and countries) to stop trafficking in misinformation, given that it is so effective and so insanely profitable (at least in the short term).


Introduction

Trafficking in misinformation is insanely profitable. We should not blame consumers of misinformation for their gullibility, or suppliers of misinformation (including adversaries) for taking advantage of the opportunity. Without market makers creating a market for misinformation and fanning the flames, there would be much less toxicity.

There is a long tradition of prioritizing profits ahead of public health, safety, and security. The term “Opium Wars” comes from a strongly worded editorial in 1840, in which the conservatives attempted to “own the libs” by linking them to drugs: why risk business in tea and textiles so the libs could smuggle opium into China? Social media is an addictive drug like opium, tobacco, and gambling, with consequences for public health, public safety, and national security.

Key Insights

What happened, and was it our fault?

Machine learning and social media have been implicated in trouble around the world: Myanmar, Sri Lanka, opposition to vaccines, climate change denial, mass shootings, Gamergate, Pizzagate, QAnon, right-wing politics (MAGA, AfD), Charlottesville, Jan 6th, etc.  Much has been written in the academic literature connecting the dots between social media addiction, misinformation, polarization, riots, cyberbullying, suicide, depression, eating disorders, and insane profits.

Much of the work in computer science focuses on what we call Risks 1.0 (bias and fairness): we are building classifiers that can detect toxicity. Journalists accuse us of pivoting: when they want to talk to us about Risks 2.0 (addictive, dangerous, and deadly), we respond with a discussion of recent progress on Risks 1.0 (toxicity detection).
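Toxicity detection of the Risks 1.0 kind is usually framed as supervised text classification. Here is a minimal sketch of such a classifier, using scikit-learn with a tiny invented dataset; the examples, labels, and model choice are illustrative assumptions, not taken from the paper:

```python
# Minimal toxicity-classification sketch (Risks 1.0 style).
# The training examples and labels below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "you are a wonderful person",
    "thanks for the helpful answer",
    "you are an idiot and everyone hates you",
    "go away, nobody wants you here",
]
labels = [0, 0, 1, 1]  # 0 = non-toxic, 1 = toxic

# TF-IDF features plus logistic regression: a common, simple baseline.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

# Score a new comment; output is a 0/1 toxicity label.
print(clf.predict(["what a helpful and kind reply"])[0])
```

Real systems use far larger labeled corpora and stronger models, but the framing is the same: the classifier flags individual toxic items, which is precisely why it addresses bias and fairness (Risks 1.0) rather than addictiveness (Risks 2.0).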

Root Causes

How does fake news spread? Our literature survey suggests several social media companies have been working over the years on machine learning algorithms for recommending content, producing a Frankenstein monster.  These companies stumbled on remarkably effective uses of persuasive technology to exploit human weaknesses.  Just as casinos take advantage of addicted gamblers, recommendation algorithms know that it is impossible for us to satisfy our cravings for likes. We cannot put our phones down, and stop taking dozens of dopamine hits every day, even though we know it is bad for us (and society). Maximizing engagement brings out the worst in people, with significant risks to public health, safety, and national security.
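The mechanism described above, ranking content by predicted engagement, can be caricatured in a few lines. The post titles and engagement scores below are invented for illustration:

```python
# Caricature of an engagement-ranked feed; items and scores are invented.
posts = [
    {"title": "calm local news update", "predicted_engagement": 0.10},
    {"title": "heartwarming rescue story", "predicted_engagement": 0.35},
    {"title": "outrage-bait conspiracy claim", "predicted_engagement": 0.90},
]

# Ranking purely by predicted engagement pushes the most
# provocative item to the top of the feed.
feed = sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)
print(feed[0]["title"])  # the outrage-bait item ranks first
```

Nothing in the objective rewards accuracy or well-being; if outrage engages more than calm reporting, outrage wins the ranking, which is the dynamic the authors blame for the toxic sludge.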

Moderation is an expensive non-solution 

Facebook and YouTube have expensive cost centers that attempt to clean up the mess, but they cannot be expected to keep up with better-resourced profit centers that are pumping out toxic sludge as fast as they can.

Incentives

The problem is that trafficking in misinformation is so insanely profitable.  We cannot expect social media companies to regulate themselves. Companies have an obligation to maximize shareholder value. It is easier for nonprofits like Wikipedia to address toxicity because nonprofits are not expected to be profitable.

Competition forces a race to the bottom, where everyone has to do the wrong thing. If one company decides to be generous and do the right thing, it will lose out to a less generous competitor.
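The race-to-the-bottom argument has the structure of a prisoner's dilemma. A sketch with invented payoffs: whatever the competitor does, exploiting users is the individually dominant move, even though mutual restraint would leave both firms better off:

```python
# Two-firm prisoner's dilemma sketch; the payoff numbers are invented.
# Each firm chooses "restrain" (do the right thing) or
# "exploit" (maximize engagement at any cost).
payoffs = {
    ("restrain", "restrain"): (3, 3),  # both behave: healthy market
    ("restrain", "exploit"):  (0, 5),  # the generous firm loses out
    ("exploit",  "restrain"): (5, 0),
    ("exploit",  "exploit"):  (1, 1),  # race to the bottom
}

def best_response(opponent_move):
    # Pick the move that maximizes our own payoff (first element)
    # given the opponent's move.
    return max(["restrain", "exploit"],
               key=lambda m: payoffs[(m, opponent_move)][0])

# Exploiting dominates regardless of what the competitor does.
print(best_response("restrain"), best_response("exploit"))
```

Both firms therefore end up at the (1, 1) outcome, which is why the authors argue that individual corporate virtue cannot fix the problem and external regulation is needed to change the payoffs.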

Constructive suggestions

What can we do about this nightmare? We view the current chaos like the Wild West. Just as that lawlessness did not last long because it was bad for business, so too, in the long run, the current chaos will be displaced by more legitimate online businesses.

As for the short term, we are pleasantly surprised by so much pushback from many parties: governments, users, investors, content providers, the press, academics, consumer groups, advertisers, and employees. There must be a way to make it less insanely profitable to traffic in misinformation. Regulators should “follow the money” and “take away the punch bowl.” Regulation is taken very seriously in Europe.

Between the lines

Assuming that markets are efficient, rational, and sane, at least in the long term, insane profits cannot continue for long. There are already hints that the short-term business case may falter at Twitter and Facebook. You know it must be bad for social media companies when The Late Show with Stephen Colbert makes jokes at their expense. It feels like the jokes we used to hear just before “Ma Bell” was split up into a bunch of “Baby Bells.” In the long run, chaos is bad for business (and for many other parties). We anticipate a sequel to “How the West Was Won” entitled “How the Web Was Won,” giving a whole new meaning to WWW.


© 2025 Montreal AI Ethics Institute. This work is licensed under a Creative Commons Attribution 4.0 International License.