🔬 Research summary by Sarah P. Grant, a freelance writer who is devoted to covering the ethics of AI and advanced technologies. She also works as a content marketer for technology companies operating in the HRTech and EdTech spaces.
[Original paper by H. Innes & M. Innes]
Overview: Widespread COVID-19 conspiracies and political disinformation prompted Facebook (which now operates under the name Meta) to ramp up countermeasures in 2020. In this paper, crime and security researchers from Cardiff University evaluate the impacts of actions the company took against two prominent COVID-19 conspiracy theorists. Along with assessing the effectiveness of the interventions, the researchers explore how this mode of social control can produce unintended consequences.
Introduction
Accurate health information can mean the difference between life and death, making COVID-19 disinformation particularly problematic. In an attempt to disrupt the flow of pandemic-related conspiracy theories circulating widely on its platforms, Facebook has been employing its toughest punishment: deplatforming (the outright ban of accounts from a particular site).
But is deplatforming effective, and does it produce undesirable outcomes? Those are the central questions that researchers from the Crime and Security Research Institute at Cardiff University sought to answer in this paper on deplatforming as a mode of social control.
For this paper, H. Innes and M. Innes conducted an empirical deep dive into deplatforming interventions performed by Facebook in 2020 against two prominent COVID-19 conspiracy theorists: David Icke and Kate Shemirani. To determine whether these interventions produced unintended consequences, the researchers measured minion account activity and replatforming behaviours, which the paper positions as two new measurement concepts. The researchers conclude that in both cases, the deplatforming actions actually drew attention to the conspiracy theorists. While the interventions “may have some limited short-term effects,” the researchers argue, “there is little reason to suppose that over the medium-term they control the flow of disinformation.”
Key Insights
Facebook on the front line of social control
Along with assessing whether Facebook was successful in curbing disinformation produced by the two charismatic conspiracy theorists, the paper also investigates how Facebook organises deplatforming in general. The researchers describe deplatforming as the company’s harshest sanction: the endpoint of an “escalatory enforcement dynamic” that escalates from lighter interventions such as algorithm adaptations and demonetisation.
Deplatforming is not a formal sanction implemented by the state; it typically operates as an informal mode of social control in which “private companies assume front-line responsibility for control of deviant behavior.” The researchers state that, while the incentive is strong for individual companies “to get bad actors off their platforms,” this does not necessarily result in curbing problematic behaviour.
The researchers also briefly reference theoretical work that places the broader problem of disinformation within a new social ordering of reality, and the rise of a post-truth era.
An unintended consequence: The Streisand Effect
The researchers set the stage for their empirical findings by unpacking a phenomenon called “The Streisand Effect,” in which attempts to suppress content draw more attention to it and harden “the ideological convictions of its followers.” Other studies have shown, for example, that Telegram, once a marginal platform, has experienced significant growth in user numbers partly because of user migration triggered by the policing actions of mainstream social media companies. The problem of disinformation can intensify, the researchers argue, when users are pushed onto platforms where posts are moderated less often.
To determine whether deplatforming is effective and can produce unintended consequences, the researchers measured social media activity associated with two influential COVID-19 conspiracy theorists after they were deplatformed on Facebook. They sourced their data from CrowdTangle, Facebook’s public insights tool.
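To make this kind of before-and-after comparison concrete, here is a minimal, hypothetical sketch of how a change in mention volume around a deplatforming date might be computed from a daily export of mention counts. The file layout, column names, removal date, and window length are illustrative assumptions, not details taken from the paper or from CrowdTangle itself.

```python
from datetime import date
import csv

# Assumed input: a CSV export of daily public mention counts with columns
# "date" (YYYY-MM-DD) and "mentions". These names, the removal date, and the
# 7-day window are placeholders for illustration only.
REMOVAL_DATE = date(2020, 4, 30)  # hypothetical deplatforming date
WINDOW_DAYS = 7                   # compare the week before vs. the week after


def pct_change_in_mentions(path: str) -> float:
    """Return the percentage change in total mentions in the window after
    the removal date relative to the window before it."""
    before = after = 0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            d = date.fromisoformat(row["date"])
            n = int(row["mentions"])
            delta = (d - REMOVAL_DATE).days
            if -WINDOW_DAYS <= delta < 0:
                before += n
            elif 0 < delta <= WINDOW_DAYS:
                after += n
    if before == 0:
        raise ValueError("no mentions recorded in the pre-removal window")
    return 100.0 * (after - before) / before


# Example usage (file name is hypothetical):
# print(f"{pct_change_in_mentions('icke_mentions.csv'):+.1f}% change in mentions")
```

A positive result from a sketch like this would correspond to the kind of post-removal increase in attention the researchers report; the paper’s actual measures (minion account activity and replatforming behaviours) are richer than a single mention count.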
The paper describes how the first conspiracy theorist, David Icke, has been spreading disinformation since the 1990s, and arguably played a role in shaping the QAnon movement with his rhetoric. During the pandemic, he espoused multiple popular conspiracies, including those about 5G and vaccinations. Facebook removed his official page with 800,000 followers in April 2020, but seven days after the removal, his public Facebook mentions increased by 84%. Seven months after the removal, there were “64 active Facebook pages and 40 active Facebook groups using his name,” and many pages directed people to Icke content on other platforms.
The researchers note that the other prominent conspiracy theorist covered in this paper, Kate Shemirani, likely became influential during the COVID-19 pandemic because she had medical qualifications. She expressed anti-semitic and anti-vaccine views, and her profile, which had 54,000 followers, was removed in September 2020. At first, the intervention disrupted her connection with followers, but shares of her Facebook videos increased over the following two months. The researchers observe that the deplatforming action likely “increased her resilience as a messenger with multiple alliances spread across multiple other platforms linking back to Facebook.”
Between the lines
This paper is significant because it goes beyond analysing the cause or content of conspiracy theories and examines the effectiveness of countermeasures. While the researchers focus on Facebook, they do acknowledge that disinformation is a complex problem and that various forces are creating a “polluted media ecosystem.”
Other researchers go further, however, pushing back more forcefully on the idea that social media is entirely to blame for the disinformation epidemic. In the book Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics, for example, a group of academics argue that long-standing institutional, political, and cultural patterns are radicalising the right-wing media ecosystem in the US. Further research into deplatforming effectiveness could therefore be grounded in an acknowledgement that social media acts as an accelerant of disinformation, not its sole cause.