Montreal AI Ethics Institute

Democratizing AI ethics literacy

De-platforming disinformation: conspiracy theories and their control

June 5, 2022

🔬 Research summary by Sarah P. Grant, a freelance writer devoted to covering the ethics of AI and advanced technologies. She also works as a content marketer for technology companies in the HRTech and EdTech spaces.

[Original paper by H. Innes & M. Innes]


Overview: Widespread COVID-19 conspiracies and political disinformation prompted Facebook (which now operates under the name Meta) to ramp up countermeasures in 2020. In this paper, crime and security researchers from Cardiff University evaluate the impacts of actions the company took against two prominent COVID-19 conspiracy theorists. Along with assessing the effectiveness of the interventions, the researchers explore how this mode of social control can produce unintended consequences.


Introduction

Accurate health information can mean the difference between life and death, making COVID-19 disinformation particularly problematic. In an attempt to disrupt the flow of pandemic-related conspiracy theories circulating widely on its platforms, Facebook has been employing its toughest punishment: deplatforming (the outright ban of accounts from a particular site).

But is deplatforming effective, and does it produce undesirable outcomes? Those are the central questions that researchers from the Crime and Security Research Institute at Cardiff University sought to answer in this paper on deplatforming as a mode of social control. 

For this paper, H. Innes and M. Innes conducted an empirical deep dive into deplatforming interventions performed by Facebook in 2020 against two prominent COVID-19 conspiracy theorists: David Icke and Kate Shemirani. To determine whether these interventions produced unintended consequences, the researchers measured minion account activity and replatforming behaviours, two new measurement concepts that the paper introduces. The researchers conclude that in both cases, the deplatforming actions actually drew attention to the conspiracy theorists. While the interventions “may have some limited short-term effects,” the researchers argue, “there is little reason to suppose that over the medium-term they control the flow of disinformation.”

Key Insights

Facebook on the front line of social control

Along with assessing whether Facebook was successful in curbing disinformation produced by the two charismatic conspiracy theorists, the paper also investigates how Facebook organises deplatforming in general. The researchers describe deplatforming as the company’s harshest sanction: the endpoint in an “escalatory enforcement dynamic” that begins with softer interventions such as algorithm adaptations and demonetization.

Deplatforming is not a formal sanction implemented by the state; rather, it is an informal mode of social control in which “private companies assume front-line responsibility for control of deviant behavior.” The researchers note that, while individual companies have a strong incentive “to get bad actors off their platforms,” this does not necessarily curb problematic behaviour.

The researchers also briefly reference theoretical work that situates the broader problem of disinformation within a new social ordering of reality and the rise of a post-truth era.

An unintended consequence: The Streisand Effect

The researchers set the stage for their empirical findings by unpacking a phenomenon called “The Streisand Effect,” whereby censorship actually hardens “the ideological convictions of its followers.” Other studies have shown that Telegram, once a marginal platform, has seen significant growth in user numbers, driven in part by users migrating in response to the policing actions of social media companies. The problem of disinformation can intensify, the researchers argue, when users are pushed onto platforms where posts are moderated less heavily.

To determine whether deplatforming is effective and whether it produces unintended consequences, the researchers measured social media activity associated with the two influential COVID-19 conspiracy theorists after Facebook deplatformed them. They sourced their data from CrowdTangle, Facebook’s public insights tool.
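The paper does not publish the authors’ query code, but for readers unfamiliar with CrowdTangle, the sketch below illustrates the general shape of pulling post-level engagement data from its public API (the `/posts` endpoint returns posts from lists tracked in the caller’s dashboard). The token, search term, and date window are hypothetical placeholders, not the study’s actual parameters.

```python
# Minimal sketch of a CrowdTangle query, NOT the authors' actual code.
# Endpoint and field names follow CrowdTangle's public API documentation;
# the token, search term, and dates below are hypothetical placeholders.
import requests

API_TOKEN = "YOUR_CROWDTANGLE_TOKEN"  # issued via the CrowdTangle dashboard

resp = requests.get(
    "https://api.crowdtangle.com/posts",
    params={
        "token": API_TOKEN,
        "searchTerm": "David Icke",  # hypothetical query term
        "startDate": "2020-05-01",   # hypothetical window after a page removal
        "endDate": "2020-11-30",
        "count": 100,                # maximum posts per request
    },
    timeout=30,
)
resp.raise_for_status()

# Tally public shares as a rough proxy for post-removal attention.
posts = resp.json()["result"]["posts"]
total_shares = sum(p["statistics"]["actual"]["shareCount"] for p in posts)
print(f"{len(posts)} posts, {total_shares} total shares")
```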

The paper describes how the first conspiracy theorist, David Icke, has been spreading disinformation since the 1990s and arguably played a role in shaping the QAnon movement with his rhetoric. During the pandemic, he espoused multiple popular conspiracy theories, including those about 5G and vaccinations. Facebook removed his official page, which had 800,000 followers, in April 2020, but seven days after the removal, his public Facebook mentions increased by 84%. Seven months after the removal, there were “64 active Facebook pages and 40 active Facebook groups using his name,” and many of these pages directed people to Icke content on other platforms.

The researchers note that the other prominent conspiracy theorist covered in the paper, Kate Shemirani, likely became influential during the COVID-19 pandemic because she had medical qualifications. She expressed antisemitic and anti-vaccine views, and her profile, which had 54,000 followers, was removed in September 2020. At first, the intervention disrupted her connection with followers, but Facebook video shares increased over the following two months. The researchers observe that the deplatforming action likely “increased her resilience as a messenger with multiple alliances spread across multiple other platforms linking back to Facebook.”

Between the lines

This paper is significant because it goes beyond analysing the causes or content of conspiracy theories and examines the effectiveness of countermeasures. While the researchers focus on Facebook, they acknowledge that disinformation is a complex problem and that various forces are creating a “polluted media ecosystem.”

Other researchers push back more forcefully on the idea that social media is solely to blame for the disinformation epidemic, however. In Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics, for example, a group of academics argue that long-standing institutional, political, and cultural patterns are radicalising the right-wing media ecosystem in the US. Further research into deplatforming effectiveness could therefore be grounded in an acknowledgement that social media plays a key role as an accelerant, not the sole cause, of disinformation.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.
