
Research summary: Challenging Truth and Trust: A Global Inventory of Organized Social Media Manipulation

July 27, 2020

Summary contributed by Samuel Curtis, a Schwarzman Scholar and member of our learning community

*Authors of original paper & link at the bottom


Mini-summary: As social media platforms allow political actors not only to reach massive audiences but also to fine-tune target audiences by location or demographic characteristics, they have become increasingly popular venues for advancing political agendas. Governments across the world, democratic and authoritarian alike, are expanding the capacity and sophistication of their "cyber troop" operations to capitalize on this medium of communication. In this report, Samantha Bradshaw and Philip N. Howard document the characteristics of computational propaganda campaigns in 48 countries. While the size, funding, and coordination capacity of each country's online operations vary, one thing remains clear regardless of location: social media platforms face an increasing risk of artificial amplification, content suppression, and media manipulation.

Full summary:

In a short time, social media platforms' roles have expanded from their nascent stages, as places for users to connect, share entertainment, discuss popular culture, and stay in touch with each other's day-to-day lives, into fields of operations for large-scale political and ideological warfare. In these online theaters, "cyber troops" carry out missions to manipulate public opinion for political purposes by disseminating and amplifying "computational propaganda" (the use of automation, algorithms, and big-data analytics to manipulate public life). While many readers may be familiar with the large, robust cyber operations based in Russia, China, the US, or North Korea, this 2018 report by Samantha Bradshaw and Philip N. Howard at the Oxford Internet Institute illuminates formally organized social media campaigns developing across the world, in countries large and small, rich and poor, authoritarian and democratic alike.

Bradshaw and Howard point out that governments once relied on "blunt instruments" to block or filter information, whereas modern social media platforms allow for far more precise information control: they can reach large numbers of people while simultaneously micro-targeting individuals by location or demographic traits. This versatility is precisely what makes social media platforms such suitable tools for shaping discourse and nudging public opinion. The value of controlling online discourse is evidenced by the growth in coordinated attempts to influence public opinion, documented in 48 countries in this report compared to 28 countries the previous year (though the authors note that their data may not be comprehensive).

While cyber troops operate under all sorts of governments, the authors point out that their roles are not uniform across political contexts. In emerging and Western democracies, political bots are used to poison the information environment, polarize voting constituencies, promote distrust, and undermine democratic processes. In authoritarian regimes, ruling parties use computational propaganda as just one tool in a portfolio of tactics to shape the party narrative, stamp out counter-narratives, and subvert elections.

Computational propaganda operations can target both foreign and domestic audiences and be conducted by government agencies, politicians and parties, private contractors, civil society organizations, or citizens and influencers. They may deploy a number of "valence strategies" (strategies defined by the attractiveness or aversiveness of their content): spreading pro-government or pro-party propaganda, attacking the opposition or mounting smear campaigns, or diverting conversations and criticism away from important issues. Often, cyber troops operate through fake accounts, which may be automated, human-controlled, or hybrid "cyborg" accounts in which operators combine automation with elements of human curation, making them particularly difficult to identify and moderate.
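To see why that hybrid curation frustrates detection, consider a toy heuristic. The sketch below is our own illustration, not a method from the report; the function name, threshold, and sample timestamps are all assumptions. It flags accounts whose inter-post intervals are suspiciously regular, a telltale of pure automation, and shows how a human operator's irregular scheduling defeats exactly that signal.

import statistics

def flag_automated(posting_times, cv_threshold=0.2):
    # Toy heuristic (an assumption, not a method from the report): scripted
    # accounts tend to post at near-regular intervals, so a low coefficient
    # of variation (stdev / mean) in the gaps between posts is suspicious.
    gaps = [b - a for a, b in zip(posting_times, posting_times[1:])]
    if len(gaps) < 2:
        return False  # too little activity to judge
    cv = statistics.stdev(gaps) / statistics.mean(gaps)
    return cv < cv_threshold

# A purely automated account posting every ~10 minutes is flagged...
print(flag_automated([0, 600, 1205, 1798, 2402, 3001]))  # True
# ...while a cyborg account, with a human spacing posts irregularly,
# sails past the same check.
print(flag_automated([0, 340, 2200, 2900, 7000, 7450]))  # False

Real moderation systems combine many such signals (content similarity, network structure, account metadata), but the same cat-and-mouse logic applies: every signature that automation leaves behind is one a human operator can deliberately scrub.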

Cyber troops also employ a suite of communication strategies. They may amplify messages by creating content, posting forum comments, and replying to genuine or artificial users; or they may suppress content by launching targeted campaigns that falsely mass-report legitimate content or users, getting them temporarily or permanently removed from the platform. The authors comment that, logically, automated accounts are found on platforms that make automation easy, namely Twitter, but that cyber campaigns take place across all common forms of social media. They also note that one-fifth of the countries showed evidence of disinformation campaigns operating over chat applications (WhatsApp, WeChat, Telegram, etc.), many of them in the Global South, where these applications are widely used and large public group chats are common.

The report also details the size, resources, status (permanent or temporary), level of coordination, and capacity of each country's cyber troops. Team sizes range from dozens (in countries such as Argentina and Kyrgyzstan) to thousands or tens of thousands (the UK and Ukraine, respectively) to millions (China). Countries differ in the budgets they allocate to computational propaganda, the degree to which their teams coordinate with other firms and actors, and how often they operate, be it full-time and year-round or only around critical dates such as elections.

The report concludes by calling on democracies to take action and formulate guidelines that discourage bad actors from exploiting computational propaganda: "To start to address these challenges [outlined in the report], we need to develop stronger rules and norms for the use of social media, big data and new information technologies during elections." Notably, the phrase "rules and norms" leaves ambiguous who should develop, implement, and enforce such reforms: social media platforms or governments? This was likely intentional, as the question of who should regulate speech in democracies warrants a paper in its own right.


Original paper by Samantha Bradshaw and Philip N. Howard (University of Oxford): https://comprop.oii.ox.ac.uk/research/cybertroops2018/
