Montreal AI Ethics Institute

Democratizing AI ethics literacy

Research summary: Troops, Trolls and Troublemakers: A Global Inventory of Organized Social Media Manipulation

August 26, 2020

Summary contributed by Nga Than, a Ph.D. student in the Sociology program at the City University of New York – The Graduate Center.

Link to full paper + authors listed at the bottom.


Mini-summary: Online media manipulation has become a global phenomenon. Bradshaw and Howard examine it by focusing on “cyber troops”: organized governmental and political actors who manipulate public opinion via social media. This report provides detailed accounts of such groups across 28 countries. The authors investigate the types of messages, valences, and communication strategies that cyber troops use, and compare their organizational forms, resources, and capacities. They find that organized social media manipulation is pervasive and global. Some organizations target domestic populations, while others try to influence the public opinion of foreign populations. Authoritarian regimes tend to run organized social media manipulation campaigns aimed at their domestic populations. In democratic regimes, government-run cyber troop campaigns target foreign publics, while political-party-supported campaigns target domestic voters. Over time, the dominant mode of organizing cyber troops has shifted from military operations to private, for-profit communication firms that work with governments.

Full summary:

Social media plays an increasingly important role in shaping public life and public discussion. It helps form public opinion and serves as a site of information acquisition across the globe. Governments and political actors have increasingly exploited these communication platforms, spending growing financial resources to employ people who generate content, influence public opinion, and engage with domestic and foreign audiences. Bradshaw and Howard assemble a unique dataset of organized media manipulation operations to understand this global trend.

The authors begin by defining the term “cyber troops,” which refers to “government, military or political‐party teams committed to manipulating public opinion over social media.” They maintain that these groups play a growing role in shaping public opinion. They then describe how they gathered information to construct a unique dataset for analyzing the size, scale, and extent to which different kinds of political regimes deploy cyber troops to influence and manipulate the public online. The authors rely mainly on news media sources written in English to find information such as budgets, personnel, organizational behavior, and communication strategies. They corroborate and supplement this information by consulting country experts and drawing on reports from research institutes and civil society organizations.

The authors find that cyber troops adopt a wide range of strategies, tools, and techniques for social media manipulation. These include commenting on social media posts to engage with citizens, targeting individuals, using both real and fake social media accounts as well as bots to spread propaganda and pro-government messages, and creating original content. Messages to users range from positive, to harassing and verbally abusive, to neutral language designed to distract public attention from important issues. Individual users are targeted to silence political dissent. This method is considered the most harmful to targeted individuals, who often receive real-life threats and suffer reputational damage. Cyber troops also create original content, such as videos and blog posts published under online aliases.

Cyber troops exhibit a wide range of organizational forms, structures, and capacities. The authors observe that some governments have their own in-house teams, while others outsource these activities to private contractors, and some galvanize volunteers or hire private citizens to spread political messages on the Internet.

In some countries, organized media manipulation is carried out by a small team, while in others a large network of government employees is involved. A notable example is China, where more than 2 million individuals work to promote the party’s ideology. The research team also found that these groups have different operating budgets, although their picture is incomplete because such information is not readily available. In authoritarian regimes, governments tend to fund these activities, while in democratic regimes, political parties tend to be the main drivers of organized social media manipulation.

Bradshaw and Howard show that cyber troops are heterogeneous in organizational structure, identifying five types: (1) a clear hierarchy and reporting structure; (2) content review by superiors; (3) strong coordination across agencies or teams; (4) weak coordination across agencies or teams; and (5) liminal teams. Cyber troops also engage in capacity-building activities, from training staff to improve the skills needed to produce and disseminate propaganda, to rewarding or incentivizing high-performing individuals, to investing in research and development projects.

This cross-country comparative research highlights the heterogeneous nature of cyber troops’ activities around the world. Cyber troops are growing in size, scope, and organizational resources and capacity. The paper has important implications for researchers, civil society organizations, and private citizens, prompting them to question how their online activities are shaped and influenced by government-funded groups. It also raises important questions about the social media environment in which government cyber operations can operate, shape public opinion, and sometimes divert public attention from important issues.


Original paper by Samantha Bradshaw, Philip N. Howard: https://comprop.oii.ox.ac.uk/wp-content/uploads/sites/89/2017/07/Troops-Trolls-and-Troublemakers.pdf

