Montreal AI Ethics Institute

Democratizing AI ethics literacy


Research summary: Acting the Part: Examining Information Operations Within #BlackLivesMatter Discourse

June 17, 2020

Summary contributed by Brooke Criswell (@Brooke_Criswell). She is pursuing a PhD in media psychology and has extensive experience in marketing & communications.

*Reference at the bottom


The researchers at the University of Washington analyzed Twitter activity around the #BlackLivesMatter movement and police-related shootings in the United States during 2016 to better understand how information campaigns manipulate social media discussions. They focused on publicly suspended accounts affiliated with the Internet Research Agency, a Russian organization that employs full-time workers to produce “professional propaganda” on social media (Arif, Stewart, & Starbird, 2018).

Social media has become a platform for information operations, especially by foreign actors seeking to alter the information infrastructure and spread disinformation (Arif, Stewart, & Starbird, 2018). “Information operations” is a term used by the United States intelligence community to describe actions that disrupt the information systems and streams of a geopolitical adversary (Arif, Stewart, & Starbird, 2018).

“Disinformation describes the intentional spread of false or inaccurate information meant to mislead others about the state of the world” (Arif, Stewart, & Starbird, 2018). 

Social media has become a breeding ground for disinformation because of the way systems and networks are created and nurtured online. For example, algorithms curate newsfeeds that align with people’s preconceived ideas, and people tend to connect with others who already share their views, producing homophilous filter bubbles (Arif, Stewart, & Starbird, 2018). These structural features contribute to the effectiveness of disinformation campaigns.

The social media ecosystem is also shaped by bots, fake news websites, conspiracy theorists, and trolls, whose content is amplified by mainstream media, influential bloggers, and ordinary individuals across social media platforms (Arif, Stewart, & Starbird, 2018).

In an April 2017 report, Facebook acknowledged that its platform had been used for “information operations” intended to influence the United States presidential election (Arif, Stewart, & Starbird, 2018). Other platforms, such as Twitter and Reddit, subsequently acknowledged that they too had been targeted by information operations from the Internet Research Agency, known to be a Russian “troll farm.”

With that in mind, the researchers analyzed how RU-IRA accounts participated in online discussions about the #BlackLivesMatter movement and police-related shootings in the U.S. during 2016.

In November 2017, Twitter released a list of 2,752 RU-IRA-affiliated troll accounts. The researchers began by analyzing behavioral network ties to narrow down cases for qualitative study. After identifying the 29 accounts most integrated into the information network, they conducted bottom-up open coding of the digital traces these accounts left behind, such as tweets, profiles, linked content, and websites.

The initial dataset included about 58.8 million tweets collected between December 31, 2015 and October 5, 2016 through the public Twitter API.

Search terms included “gunshot,” “gunman,” “shooter,” and “shooting”; for the #BlackLivesMatter discourse, they searched for “BlackLivesMatter,” “AllLivesMatter,” and “BlueLivesMatter.”

They first conducted a network analysis of the 22,020 accounts discussing these issues. Cross-referencing those accounts with the list released by Twitter, they found that 96 RU-IRA accounts from Twitter’s list were present in the data, each of which tweeted at least once with the relevant hashtags. These accounts interacted with many accounts around them, giving them potentially greater influence within the networks (Arif, Stewart, & Starbird, 2018).
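The cross-referencing step can be pictured as a simple set intersection between the accounts observed in the tweet dataset and Twitter’s released list. This is an illustrative sketch, not the authors’ actual code, and the handles are hypothetical:

```python
# Sketch: cross-referencing accounts from a tweet dataset against a released
# list of flagged (e.g., RU-IRA-affiliated) accounts via set intersection.

def find_flagged_accounts(dataset_accounts, flagged_list):
    """Return dataset accounts that appear on the flagged list.

    Screen names are lowercased before comparison, since Twitter
    handles are case-insensitive.
    """
    flagged = {a.lower() for a in flagged_list}
    return sorted(a for a in dataset_accounts if a.lower() in flagged)

# Hypothetical handles for illustration only.
dataset_accounts = ["activist_one", "Fake_Account_A", "news_feed_99", "Fake_Account_B"]
flagged_list = ["fake_account_a", "fake_account_b", "some_other_troll"]

print(find_flagged_accounts(dataset_accounts, flagged_list))
# → ['Fake_Account_A', 'Fake_Account_B']
```

In the study, this kind of matching reduced 22,020 accounts to the 96 RU-IRA accounts that actually appeared in the collected data.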

After the network analysis was complete, they began the qualitative analysis of the 29 accounts. Those accounts produced 109 tweets, which were retweeted 1,934 times within the data collection (Arif, Stewart, & Starbird, 2018).

They then analyzed three central units: the profile data; the tweets, with a focus on original content including memes; and the external websites, social platforms, and news articles these accounts linked to, in order to “follow the person” (Arif, Stewart, & Starbird, 2018).

A structural analysis was conducted on the 22,020 Twitter accounts and the 58,695 retweets their content received. The researchers used a community detection algorithm to identify clusters systematically, and the clusters divided along political lines: profile bios in one community aligned with the Democratic presidential candidate and the Black Lives Matter movement, while the other community supported Trump and the MAGA (“Make America Great Again”) hashtag (Arif, Stewart, & Starbird, 2018).
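The paper ran a community detection algorithm over the retweet graph; as a much simpler stand-in for illustration, the sketch below assigns each account to a political side by its hashtag usage and then counts how many retweets stay within a cluster versus cross between them. All account names and counts are hypothetical, and this is not the authors’ method:

```python
# Sketch: a simplified proxy for the paper's structural analysis.
# Accounts are assigned to a side by hashtag usage; retweet edges are
# then counted as within-cluster vs. cross-cluster.
from collections import Counter

LEFT_TAGS = {"blacklivesmatter"}
RIGHT_TAGS = {"alllivesmatter", "bluelivesmatter", "maga"}

def side(hashtags):
    """Classify one account's hashtag counts as 'left', 'right', or 'mixed'."""
    left = sum(c for t, c in hashtags.items() if t in LEFT_TAGS)
    right = sum(c for t, c in hashtags.items() if t in RIGHT_TAGS)
    if left > right:
        return "left"
    if right > left:
        return "right"
    return "mixed"

def polarization(account_tags, retweets):
    """Count retweet edges that stay within a cluster vs. cross clusters."""
    sides = {a: side(tags) for a, tags in account_tags.items()}
    within = across = 0
    for retweeter, author in retweets:
        s1, s2 = sides[retweeter], sides[author]
        if "mixed" in (s1, s2):
            continue  # ignore accounts with no clear side
        if s1 == s2:
            within += 1
        else:
            across += 1
    return within, across

# Hypothetical accounts and retweet edges (retweeter, original author).
account_tags = {
    "a1": Counter({"blacklivesmatter": 5}),
    "a2": Counter({"blacklivesmatter": 3, "alllivesmatter": 1}),
    "b1": Counter({"maga": 4, "bluelivesmatter": 2}),
    "b2": Counter({"alllivesmatter": 2}),
}
retweets = [("a1", "a2"), ("a2", "a1"), ("b1", "b2"), ("a1", "b1")]
print(polarization(account_tags, retweets))  # → (3, 1)
```

A heavy dominance of within-cluster retweets over cross-cluster ones is exactly the kind of politically divided structure the study’s community detection surfaced.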

Their results suggested that information operations were occurring: while social media can bring people together, when platforms such as Twitter are targeted, some accounts deliberately work to pull people apart. This aligns with other literature showing that disinformation tactics are ideologically fluid and seek to exploit social divides (Arif, Stewart, & Starbird, 2018).

Many of the profiles analyzed were deceitful, pretending to be a certain kind of person, such as an African American who fit stereotypical expectations. Another finding was that these Russia-linked accounts often pointed to websites that undermine traditional media in favor of alternative media sites set up to support information operations (Arif, Stewart, & Starbird, 2018).

These examples highlight that information operations do not always try to persuade followers politically with true or false claims. Instead, they affirm and represent personal experiences and shared beliefs, reconfirming what people already believe based on stereotypes that may or may not be accurate. By blending into the communities they target, these accounts become more persuasive socially and emotionally.

This research opens the door to understanding the mechanisms these information operations accounts use to manipulate people, and their broader goals in shaping online political discourse, primarily in the United States (Arif, Stewart, & Starbird, 2018).

Overall, these accounts use fictitious identities to reflect and shape social divisions, and they can undermine trust in information sources such as the “mainstream media” (Arif, Stewart, & Starbird, 2018). Furthermore, because their tactics resonate with the actual people they target, they are more successful: they understand how and why members of the targeted community think the way they do and feed them information they already believe, which strengthens those beliefs.


Arif, A., Stewart, L. G., & Starbird, K. (2018). Acting the Part: Examining Information Operations Within #BlackLivesMatter Discourse. Proceedings of the ACM on Human-Computer Interaction, 2(CSCW), 1-27. doi:10.1145/3274289

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.


  • © 2025 MONTREAL AI ETHICS INSTITUTE.
  • This work is licensed under a Creative Commons Attribution 4.0 International License.
  • Learn more about our open access policy here.