Montreal AI Ethics Institute

De-platforming disinformation: conspiracy theories and their control

June 5, 2022

🔬 Research summary by Sarah P. Grant, a freelance writer who is devoted to covering the ethics of AI and advanced technologies. She also works as a content marketer for technology companies operating in the HRTech and EdTech spaces.

[Original paper by H. Innes & M. Innes]


Overview: Widespread COVID-19 conspiracy theories and political disinformation prompted Facebook (now operating under the name Meta) to ramp up countermeasures in 2020. In this paper, crime and security researchers from Cardiff University evaluate the impacts of actions the company took against two prominent COVID-19 conspiracy theorists. Along with assessing the effectiveness of the interventions, the researchers explore how this mode of social control can produce unintended consequences.


Introduction

Accurate health information can mean the difference between life and death, making COVID-19 disinformation particularly problematic. In an attempt to disrupt the flow of pandemic-related conspiracy theories circulating widely on its platforms, Facebook has been employing its toughest punishment: deplatforming (the outright ban of accounts from a particular site).

But is deplatforming effective, and does it produce undesirable outcomes? Those are the central questions that researchers from the Crime and Security Research Institute at Cardiff University sought to answer in this paper on deplatforming as a mode of social control. 

For this paper, H. Innes and M. Innes conducted an empirical deep dive into deplatforming interventions Facebook carried out in 2020 against two prominent COVID-19 conspiracy theorists: David Icke and Kate Shemirani. To determine whether these interventions produced unintended consequences, the researchers measured minion account activity and replatforming behaviours, two new measurement concepts the paper introduces. The researchers conclude that in both cases, the deplatforming actions actually drew attention to these conspiracy theorists. While the interventions “may have some limited short-term effects,” the researchers argue, “there is little reason to suppose that over the medium-term they control the flow of disinformation.”

Key Insights

Facebook on the front line of social control

Along with assessing whether Facebook was successful in curbing disinformation produced by the two charismatic conspiracy theorists, the paper also investigates how Facebook organises deplatforming in general. The researchers describe deplatforming as the company’s harshest sanction: the endpoint in an “escalatory enforcement dynamic” that also includes interventions such as algorithm adaptations and demonetization.

Deplatforming is not a formal sanction implemented by the state; rather, it is an informal mode of social control in which “private companies assume front-line responsibility for control of deviant behavior.” The researchers state that, while the incentive is strong for individual companies “to get bad actors off their platforms,” this does not necessarily curb problematic behaviour.

The researchers also briefly reference theoretical work that places the broader problem of disinformation within a new social ordering of reality, and the rise of a post-truth era.

An unintended consequence: The Streisand Effect

The researchers set the stage for their empirical findings by unpacking a phenomenon called “The Streisand Effect,” where censorship actually hardens “the ideological convictions of its followers.” Other studies have shown that Telegram, once a marginal platform, has experienced significant growth in user numbers, partly due to migration driven by the policing actions of mainstream social media companies. The problem of disinformation can intensify, they argue, when users are pushed onto platforms where posts are moderated less often.

To determine whether deplatforming is effective and can produce unintended consequences, the researchers measured social media activity associated with two influential COVID-19 conspiracy theorists after they were deplatformed on Facebook. They sourced their data from CrowdTangle, Facebook’s public insights tool.

The paper describes how the first conspiracy theorist, David Icke, has been spreading disinformation since the 1990s, and arguably played a role in shaping the QAnon movement with his rhetoric. During the pandemic, he espoused multiple popular conspiracies, including those about 5G and vaccinations. Facebook removed his official page with 800,000 followers in April 2020, but seven days after the removal, his public Facebook mentions increased by 84%. Seven months after the removal, there were “64 active Facebook pages and 40 active Facebook groups using his name,” and many pages directed people to Icke content on other platforms.

The researchers note that the other prominent conspiracy theorist covered in this paper, Kate Shemirani, likely became influential during the COVID-19 pandemic because she had medical qualifications. She expressed antisemitic and anti-vaccine views, and her profile, which had 54,000 followers, was removed in September 2020. The intervention initially disrupted her connection with her followers, but the number of Facebook video shares increased over the following two months. The researchers observe that the deplatforming action likely “increased her resilience as a messenger with multiple alliances spread across multiple other platforms linking back to Facebook.”

Between the lines

This paper is significant because it goes beyond analysing the causes or content of conspiracy theories and examines the effectiveness of countermeasures. While the researchers focus on Facebook, they acknowledge that disinformation is a complex problem and that various forces are creating a “polluted media ecosystem.”

Other researchers go further, however, pushing back more forcefully on the idea that social media is entirely to blame for the disinformation epidemic. In Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics, for example, a group of academics argues that long-standing institutional, political, and cultural patterns are radicalising the right-wing media ecosystem in the US. Further research into deplatforming effectiveness could therefore be grounded in an acknowledgement that social media plays a key role as an accelerant, not the sole cause, of disinformation.



© 2025 Montreal AI Ethics Institute. This work is licensed under a Creative Commons Attribution 4.0 International License.