
Research summary: Acting the Part: Examining Information Operations Within #BlackLivesMatter Discourse

June 17, 2020

Summary contributed by Brooke Criswell (@Brooke_Criswell). She is pursuing a PhD in media psychology and has extensive experience in marketing & communications.

*Reference at the bottom


The researchers at the University of Washington analyzed Twitter activity around the #BlackLivesMatter movement and police-related shootings in the United States during 2016 to better understand how information campaigns manipulate social media discussions. They focused on publicly suspended accounts affiliated with the Internet Research Agency (RU-IRA), a Russian organization that employs full-time staff to produce “professional propaganda” on social media (Arif, Stewart, & Starbird, 2018).

Social media has become a platform for information operations, particularly by foreign actors seeking to alter the information infrastructure and spread disinformation (Arif, Stewart, & Starbird, 2018). “Information operations” is a term used by the United States intelligence community to describe actions that disrupt the information systems and streams of a geopolitical adversary (Arif, Stewart, & Starbird, 2018).

“Disinformation describes the intentional spread of false or inaccurate information meant to mislead others about the state of the world” (Arif, Stewart, & Starbird, 2018). 

Social media has become a breeding ground for disinformation because of the way systems and networks are created and nurtured online. For example, algorithms curate newsfeeds that align with people’s preconceived ideas, and people tend to connect with others who already share their views, producing homophily-driven filter bubbles (Arif, Stewart, & Starbird, 2018). These structural features contribute to how effectively disinformation spreads.

Moreover, the social media ecosystem is shaped by bots, fake news websites, conspiracy theorists, and trolls, whose content is picked up and amplified by mainstream media, influential bloggers, and ordinary individuals across platforms (Arif, Stewart, & Starbird, 2018).

In an April 2017 report, Facebook acknowledged that its platform had been used for “information operations” to influence the United States presidential election (Arif, Stewart, & Starbird, 2018). Other platforms, including Twitter and Reddit, subsequently disclosed that they had also been targeted by information operations run by the Internet Research Agency, a known Russian “troll farm.”

With that in mind, the researchers analyzed how RU-IRA accounts participated in online discussions about the #BlackLivesMatter movement and shootings in the U.S. during 2016.

In November 2017, Twitter released a list of 2,752 RU-IRA-affiliated troll accounts. The researchers began by analyzing behavioral network ties to narrow down cases for qualitative study. After identifying 29 accounts that were integrated into the information network, they conducted bottom-up open coding of the digital traces these accounts left behind, such as tweets, profiles, linked content, and websites.

The initial dataset included about 58.8 million tweets collected between December 31, 2015 and October 5, 2016 via the public Twitter API.

Among the terms they searched for were “gunshot,” “gunman,” “shooter,” and “shooting.” For the Black Lives Matter discourse, they searched for the hashtags #BlackLivesMatter, #AllLivesMatter, and #BlueLivesMatter.
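
As a rough illustration of this keyword-filtering step, the sketch below checks whether a tweet’s text matches any of the tracked terms. The dict structure and field names are assumptions for illustration only; the study’s actual collection ran against Twitter’s API.

```python
# Minimal sketch of the keyword filtering described above. The tweet
# structure ({"id": ..., "text": ...}) is hypothetical; the study
# collected matching tweets directly through the Twitter API.

SHOOTING_TERMS = ["gunshot", "gunman", "shooter", "shooting"]
HASHTAGS = ["blacklivesmatter", "alllivesmatter", "bluelivesmatter"]

def matches_keywords(tweet_text: str) -> bool:
    """Return True if the tweet mentions any tracked term or hashtag."""
    text = tweet_text.lower()
    return any(term in text for term in SHOOTING_TERMS + HASHTAGS)

tweets = [
    {"id": 1, "text": "March downtown today #BlackLivesMatter"},
    {"id": 2, "text": "Lovely weather this afternoon"},
]
relevant = [t for t in tweets if matches_keywords(t["text"])]
print([t["id"] for t in relevant])  # -> [1]
```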

They first conducted a network analysis of the 22,020 accounts discussing these issues. They then cross-referenced those accounts with the list released by Twitter and found that 96 RU-IRA accounts from Twitter’s list were present in the data, each having tweeted at least once with the relevant hashtags. These accounts interacted with many of the accounts surrounding them, potentially giving them outsized influence within the networks (Arif, Stewart, & Starbird, 2018).
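
The cross-referencing step amounts to intersecting the set of accounts in the dataset with Twitter’s released list. A minimal sketch, using made-up account names:

```python
# Hedged sketch of the cross-referencing step: intersect accounts
# active in the dataset with Twitter's RU-IRA list. All account
# names below are hypothetical.

dataset_accounts = {"user_a", "user_b", "troll_1", "troll_2"}  # 22,020 in the study
ru_ira_list = {"troll_1", "troll_2", "troll_3"}                # 2,752 in Twitter's release

trolls_in_data = dataset_accounts & ru_ira_list  # set intersection
print(len(trolls_in_data))  # the study found 96 such accounts
```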

After the network analysis was completed, they began the qualitative analysis of the 29 accounts. Those accounts produced 109 tweets, which were retweeted 1,934 times in the data collection (Arif, Stewart, & Starbird, 2018).

They then analyzed three central units: profile data; tweets, with a focus on original content, including memes; and the external websites, social platforms, and news articles these accounts linked to, allowing the researchers to “follow the person” (Arif, Stewart, & Starbird, 2018).

A structural analysis was conducted on the 22,020 Twitter accounts and the 58,695 retweets their content received. The researchers used a community detection algorithm to identify clusters systematically, and the clusters fell largely along political lines: profile bios in one community expressed support for the Democratic presidential candidate and the Black Lives Matter movement, while the other community supported Trump and the #MAGA (“Make America Great Again”) hashtag (Arif, Stewart, & Starbird, 2018).
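
The summary does not name the specific community detection algorithm, so the sketch below stands in with greedy modularity maximization from networkx, one common choice for partitioning a retweet graph; all account names are hypothetical:

```python
# Illustrative sketch: build a retweet graph and partition it into
# clusters with a community detection algorithm. Greedy modularity
# maximization is an assumption here, not necessarily the paper's method.

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Edges are (retweeter, original_author) pairs; names are made up.
retweets = [
    ("alice", "blm_news"), ("bob", "blm_news"), ("alice", "bob"),
    ("carol", "maga_daily"), ("dave", "maga_daily"), ("carol", "dave"),
]

G = nx.Graph()
G.add_edges_from(retweets)

# Partition the graph into densely connected clusters.
for i, community in enumerate(greedy_modularity_communities(G)):
    print(f"Cluster {i}: {sorted(community)}")
```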

Their results suggested that information operations were indeed occurring: while social media can bring people together, when platforms such as Twitter are targeted, some accounts deliberately work to pull people apart. This aligns with other literature showing that disinformation tactics are ideologically fluid and seek to exploit existing social divides (Arif, Stewart, & Starbird, 2018).

Many of the profiles analyzed were deceptive, pretending to be a certain kind of person, such as an African American persona built to fit stereotypical expectations. Another finding was that these Russia-linked accounts often pointed to websites that undermine traditional media in favor of alternative media outlets set up to support information operations (Arif, Stewart, & Starbird, 2018).

These examples highlight that information operations do not always try to persuade followers politically with true or false claims. Instead, they affirm and represent personal experiences and shared beliefs, reconfirming what people already believe based on stereotypes that may or may not be accurate. These accounts blend into the communities they target, which helps them become more persuasive socially and emotionally.

This research opens the door to understanding the mechanisms these information-operations accounts use to manipulate people, and their broader goals in shaping online political discourse, primarily in the United States (Arif, Stewart, & Starbird, 2018).

Overall, these accounts use fictitious identities to reflect and shape social divisions and to undermine trust in information sources such as the “mainstream media” (Arif, Stewart, & Starbird, 2018). Furthermore, because their tactics resonate with the actual people they target, they are more effective: they understand how and why members of the targeted community think the way they do and feed them information they already believe, which strengthens those beliefs.


Arif, A., Stewart, L. G., & Starbird, K. (2018). Acting the Part: Examining Information Operations Within #BlackLivesMatter Discourse. Proceedings of the ACM on Human-Computer Interaction, 2(CSCW), 1-27. doi:10.1145/3274289

