
Emerging trends: Unfair, biased, addictive, dangerous, deadly, and insanely profitable

May 31, 2023

🔬 Research Summary by Kenneth Church, a researcher who works on natural language processing, information retrieval, artificial intelligence, and machine learning.

[Original paper by Kenneth Church, Annika Schoene, John E. Ortega, Raman Chandrasekar and Valia Kordoni]


Overview: A survey of the literature suggests that social media has created a Frankenstein monster that is exploiting human weaknesses. We cannot put our phones down, even though we know it is bad for us (and society). Just as we cannot expect tobacco companies to sell fewer cigarettes and prioritize public health ahead of profits, so too it may be asking too much of companies (and countries) to stop trafficking in misinformation, given that it is so effective and so insanely profitable (at least in the short term).


Introduction

Trafficking in misinformation is insanely profitable. We should not blame consumers of misinformation for their gullibility, or suppliers of misinformation (including adversaries) for taking advantage of the opportunity; the blame belongs with the market makers. Without them creating a market for misinformation and fanning the flames, there would be much less toxicity.

There is a long tradition of prioritizing profits ahead of public health, safety, and security. The term “Opium Wars” comes from a strongly worded editorial in 1840, in which conservatives attempted to “own the libs” by linking them to drugs: why risk the business in tea and textiles so the libs could smuggle opium into China? Social media is an addictive drug like opium, tobacco, and gambling, with consequences for public health, public safety, and national security.

Key Insights

What happened, and was it our fault?

Machine learning and social media have been implicated in trouble around the world: Myanmar, Sri Lanka, opposition to vaccines, climate change denial, mass shootings, Gamergate, Pizzagate, QAnon, right-wing politics (MAGA, AfD), Charlottesville, Jan 6th, etc. Much has been written in the academic literature connecting the dots between social media addiction, misinformation, polarization, riots, cyberbullying, suicide, depression, eating disorders, and insane profits.

Much of the work in Computer Science focuses on what we call Risks 1.0 (bias and fairness): we are building classifiers that can detect toxicity. Journalists accuse us of pivoting: when they want to talk to us about Risks 2.0 (addictive, dangerous, and deadly), we respond with a discussion of recent progress on Risks 1.0 (toxicity detection).
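To make the Risks 1.0 style of work concrete, here is a minimal sketch of a toxicity classifier. The training examples, labels, and expected outputs are invented for this illustration; production systems are trained on large annotated corpora and typically use pretrained language models rather than bag-of-words features.

```python
# Toy toxicity classifier (illustration only; the data below is made up,
# not from the paper).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hand-made training set: 0 = benign, 1 = toxic.
texts = [
    "you are wonderful", "have a great day", "thanks for sharing",
    "you are an idiot", "go away loser", "nobody wants you here",
]
labels = [0, 0, 0, 1, 1, 1]

# Bag-of-words features feeding a logistic-regression classifier.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

# Score new comments; on this toy data we expect [0, 1].
print(clf.predict(["have a wonderful day", "go away you idiot"]))
```

Classifiers like this can flag toxic content after the fact; the paper's point is that they address Risks 1.0 without touching the engagement incentives behind Risks 2.0.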

Root Causes

How does fake news spread? Our literature survey suggests that several social media companies, working over the years on machine learning algorithms for recommending content, have produced a Frankenstein monster. These companies stumbled on remarkably effective uses of persuasive technology to exploit human weaknesses. Just as casinos take advantage of addicted gamblers, recommendation algorithms know that it is impossible for us to satisfy our cravings for likes. We cannot put our phones down and stop taking dozens of dopamine hits every day, even though we know it is bad for us (and society). Maximizing engagement brings out the worst in people, with significant risks to public health, safety, and national security.
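As a cartoon of this dynamic, consider a recommender that simply shows whatever content category has earned the most engagement so far. Everything below (the categories, the click-through rates, the epsilon-greedy policy) is an invented minimal sketch, not any company's actual system; real recommenders are far more sophisticated, but they share the same objective.

```python
import random

# Toy engagement-maximizing recommender (assumed numbers, for illustration).
TRUE_CTR = {"news": 0.05, "hobbies": 0.08, "outrage": 0.20}

counts = {k: 0 for k in TRUE_CTR}  # times each category was shown
clicks = {k: 0 for k in TRUE_CTR}  # clicks each category received

def recommend(epsilon=0.1):
    """Epsilon-greedy: usually show the category with the best observed
    engagement rate, occasionally explore a random one."""
    if random.random() < epsilon:
        return random.choice(list(TRUE_CTR))
    return max(TRUE_CTR, key=lambda k: clicks[k] / counts[k] if counts[k] else 0.0)

random.seed(0)
for _ in range(10_000):
    item = recommend()
    counts[item] += 1
    clicks[item] += random.random() < TRUE_CTR[item]  # simulated user click

for k, n in counts.items():
    print(f"{k:8s} shown {n:5d} times")  # "outrage" ends up dominating the feed
```

Even this toy optimizer quickly learns to fill the feed with the outrage category, because that is what maximizes clicks; nobody has to program toxicity in explicitly.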

Moderation is an expensive non-solution 

Facebook and YouTube have expensive cost centers that attempt to clean up the mess, but they cannot be expected to keep up with better-resourced profit centers that are pumping out toxic sludge as fast as they can.

Incentives

The problem is that trafficking in misinformation is so insanely profitable. We cannot expect social media companies to regulate themselves: companies have an obligation to maximize shareholder value. It is easier for nonprofits like Wikipedia to address toxicity because nonprofits are not expected to be profitable.

Competition forces a race to the bottom, where everyone has to do the wrong thing. If one company decides to be generous and do the right thing, it will lose out to a less generous competitor.

Constructive suggestions

What can we do about this nightmare? We view the current chaos as akin to the Wild West. Just as that lawlessness did not last long because it was bad for business, so too, in the long run, the current chaos will be displaced by more legitimate online businesses.

As for the short term, we are pleasantly surprised by how much pushback is coming from so many parties: governments, users, investors, content providers, the press, academics, consumer groups, advertisers, and employees. There must be a way to make it less insanely profitable to traffic in misinformation. Regulators should “follow the money” and “take away the punch bowl.” Regulation is taken very seriously in Europe.

Between the lines

If markets are efficient, rational, and sane, at least in the long term, then insane profits cannot continue for long. There are already hints that the short-term business case may falter at Twitter and Facebook. You know it must be bad for social media companies when The Late Show with Stephen Colbert makes jokes at their expense; it feels like the jokes we used to hear just before “Ma Bell” was split up into a bunch of “Baby Bells.” In the long run, chaos is bad for business (and many other parties). We anticipate a sequel to “How the West Was Won” entitled “How the Web Was Won,” giving a whole new meaning to WWW.

