Montreal AI Ethics Institute
De-platforming disinformation: conspiracy theories and their control

June 5, 2022

🔬 Research summary by Sarah P. Grant, a freelance writer who is devoted to covering the ethics of AI and advanced technologies. She also works as a content marketer for technology companies operating in the HRTech and EdTech spaces.

[Original paper by H. Innes & M. Innes]


Overview: Widespread COVID-19 conspiracies and political disinformation prompted Facebook (which now operates under the name Meta) to ramp up countermeasures in 2020. In this paper, crime and security researchers from Cardiff University evaluate the impacts of actions the company took against two prominent COVID-19 conspiracy theorists. Along with assessing the effectiveness of the interventions, the researchers explore how this mode of social control can produce unintended consequences.


Introduction

Accurate health information can mean the difference between life and death, making COVID-19 disinformation particularly problematic. In an attempt to disrupt the flow of pandemic-related conspiracy theories circulating widely on its platforms, Facebook has been employing its toughest punishment: deplatforming (the outright ban of accounts from a particular site).

But is deplatforming effective, and does it produce undesirable outcomes? Those are the central questions that researchers from the Crime and Security Research Institute at Cardiff University sought to answer in this paper on deplatforming as a mode of social control. 

For this paper, H. Innes and M. Innes conducted an empirical deep-dive into deplatforming interventions performed by Facebook in 2020 against two prominent COVID-19 conspiracy theorists: David Icke and Kate Shemirani. To determine whether these interventions produced unintended consequences, the researchers measured minion account activity and replatforming behaviours, which the paper positions as two new measurement concepts. The researchers conclude that in both cases, the deplatforming actions actually drew attention to these conspiracy theorists. While the interventions “may have some limited short-term effects,” the researchers argue, “there is little reason to suppose that over the medium-term they control the flow of disinformation.”

Key Insights

Facebook on the front line of social control

Along with assessing whether Facebook was successful in curbing disinformation produced by the two charismatic conspiracy theorists, the paper also investigates how Facebook organises deplatforming in general. The researchers describe deplatforming as the company’s harshest sanction: the endpoint in an “escalatory enforcement dynamic” that begins with milder interventions such as algorithm adaptations and demonetization.

Deplatforming is not a formal sanction implemented by the state; it is typically enforced as an informal mode of social control in which “private companies assume front-line responsibility for control of deviant behavior.” The researchers state that, while individual companies have a strong incentive “to get bad actors off their platforms,” this does not necessarily curb problematic behaviour.

The researchers also briefly reference theoretical work that places the broader problem of disinformation within a new social ordering of reality, and the rise of a post-truth era.

An unintended consequence: The Streisand Effect

The researchers set the stage for their empirical findings by unpacking a phenomenon called “The Streisand Effect,” whereby censorship actually hardens “the ideological convictions of its followers.” Other studies have shown that Telegram, once a marginal platform, has experienced significant growth in user numbers, driven in part by users migrating away from the policing actions of mainstream social media companies. The problem of disinformation can intensify, they argue, when users are pushed onto platforms where posts are moderated less often.

To determine whether deplatforming is effective and can produce unintended consequences, the researchers measured social media activity associated with two influential COVID-19 conspiracy theorists after they were deplatformed on Facebook. They sourced their data from CrowdTangle, Facebook’s public insights tool.

The paper describes how the first conspiracy theorist, David Icke, has been spreading disinformation since the 1990s, and arguably played a role in shaping the QAnon movement with his rhetoric. During the pandemic, he espoused multiple popular conspiracies, including those about 5G and vaccinations. Facebook removed his official page with 800,000 followers in April 2020, but seven days after the removal, his public Facebook mentions increased by 84%. Seven months after the removal, there were “64 active Facebook pages and 40 active Facebook groups using his name,” and many pages directed people to Icke content on other platforms.

The researchers note that the other prominent conspiracy theorist covered in this paper, Kate Shemirani, likely became influential during the COVID-19 pandemic because she had medical qualifications. She expressed antisemitic and anti-vaccine views, and her profile, which had 54,000 followers, was removed in September 2020. The intervention initially disrupted her connection with followers, but Facebook video shares of her content increased over the following two months. The researchers observe that the deplatforming action likely “increased her resilience as a messenger with multiple alliances spread across multiple other platforms linking back to Facebook.”

Between the lines

This paper is significant because it goes beyond analysing the cause or content of conspiracy theories and examines the effectiveness of countermeasures. While the researchers focus on Facebook, they do acknowledge that disinformation is a complex problem and that various forces are creating a “polluted media ecosystem.”

Other researchers go further, however, pushing back more forcefully on the idea that social media is entirely to blame for the disinformation epidemic. In Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics, for example, a group of academics argues that long-standing institutional, political, and cultural patterns are radicalising the right-wing media ecosystem in the US. Further research into deplatforming effectiveness could therefore be grounded in an acknowledgement that social media acts as an accelerant of disinformation, not its sole cause.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.

About Us

Founded in 2018, the Montreal AI Ethics Institute (MAIEI) is an international non-profit organization equipping citizens concerned about artificial intelligence and its impact on society to take action.

© 2025 Montreal AI Ethics Institute. This work is licensed under a Creative Commons Attribution 4.0 International License.