Montreal AI Ethics Institute

Democratizing AI ethics literacy

Automating Extremism: Mapping the Affective Roles of Artificial Agents in Online Radicalization

June 24, 2023

🔬 Research Summary by Giuliana Luz Grabina, a philosophy undergraduate student at McGill University, with an interest in AI/technology policy regulation from a gendered perspective.

[Original paper by Peter Mantello, Tung Manh Ho, and Lena Podoletz]


Overview: Conversational AI bots are now augmenting and replacing human efforts across various fields, including advertising, finance, mental health counseling, dating, and wellness. This paper suggests that even jihadis and neo-Nazis may not be safe from the AI job takeover. As conversational bots continue to evolve, the authors warn, they are becoming a vital radicalization and recruitment tool for extremist organizations.


Introduction

In the aftermath of 9/11, a now-defunct chatbot was launched as a cost-effective measure to handle the sudden interest in, and overwhelming traffic to, the US military recruitment website. The chatbot, Sergeant Star, responded to various inquiries, including questions about scholarships, length of training, and career advancement. Since then, the authors argue, social bots have gained traction as a tool for strategic communication and indoctrination by various extremist organizations, including jihadi and neo-Nazi groups.

Key Insights

AI-Powered Propaganda: Manufacturing Consent for Collective Violence 

While propaganda has historically played a significant role in manufacturing consent for collective violence through mediums such as prose, poetry, music, print media, radio, film, and television, the authors argue that its contemporary manifestation is “deeply hard-wired into modern networks of communication, often cloaked in unseen algorithms and automated agents.” These algorithmic agents provocateurs can perform various disruptive functions. For instance, a “noise” bot can disrupt communication by flooding social media sites with thousands of adversarial posts, thereby diluting the impact of opposing content. However, deploying bots to drown out opposing political conversations or voices of dissent is not exclusive to extremists or authoritarian regimes. Indeed, a 2019 study by Oxford researchers Bradshaw & Howard found evidence of psycho-social manipulation campaigns by autonomous artificial agents in 70 countries, up from 48 countries in 2018 and 28 countries in 2017. As the authors put it, “AI has allowed for a paradigm shift to occur in how political propaganda is constructed, negotiated, and ultimately legitimated.” In what follows, the authors investigate and analyze, through a biopolitical lens, the malicious use of social bots as an emerging psychological tool for extremist recruitment.

Disembodied Connections and Affective Bonding

As social relationships move past the necessity of physical proximity, especially in the context of online radicalization, and as bots become increasingly intelligent conversationalists, people will find it harder to distinguish their digital human-human relationships from their human-machine relationships. Thus, according to the authors, it is no surprise that bots have become an important tool for “affective bonding,” the “psycho-physical phenomenon where an individual develops an emotional relationship with another person or group.” More significantly, affective bonding is critical to radicalization.

On an organizational level, radical groups augment their strategic communications by incorporating bots into their operations, creating an illusion of a sophisticated organizational framework. This, in turn, encourages positive emotions (pride, joy, respect, and belonging) while equally generating negative feelings (hate, anger, and disgust) for out-groups. To illustrate, the authors use the example of the IS infomercial “Harvest of the Soldiers,” a bot-driven video depicting battlefield statistics of recent kills and victories, interspersed with the recurring figure of an Arab warrior on horseback sent weekly through private Telegram channels of IS supporters. 

Furthermore, to strengthen affective bonding at an organizational level, extremist bots can perform various functions to identify susceptible targets and connect them with official human recruiters or other supporters. Just as phishing bots can harvest social media users’ information and then attempt to persuade them to add a new neighbor to their contact lists, extremist bots can connect potential recruits who share similar interests but have not engaged with each other. For instance, a friending bot might identify and match small groups of social media users whose profiles an algorithm deems to have a “high likelihood of being radicalized.” In this way, the authors claim, the friending bot would help initiate secure communications among like-minded users, who might then receive private introductions to a recruiter.

From Assistants to Recruiters: The Future of Online Radicalization

Similar to mental health counseling bots like “Eliza” or “Woebot,” extremist bots can connect emotionally with newcomers by linking them with like-minded people and then handing them to a human recruiter for potential recruitment. These bots are so effective, in fact, that iN2 researchers suggest the quick response and upbeat tone of social bots’ replies to applicant queries “heightened individual’s motivation to enlist.” They noted that the bots facilitate a stronger emotional connection in the recruitment process, making applicants feel more at ease and less suspicious than when dealing with a real person. As bots continue to evolve, the authors warn, software robots will assume a prominent role in online radicalization. In addition to offering greater reach and heightened levels of anonymity, safety, and security for those joining extremist groups, bots may also empower unaffiliated sympathizers to participate in acts of violence.

Countering Radicalization: Progress and Limitations

Since 2015, social media companies and security agencies have taken the initiative, using both AI and human interlocutors, to counter the influx of jihadi bots in online radicalization. For instance, in 2018, the European Union’s law enforcement agency, Europol, organized “Referral Action Day,” an annual campaign to detect, disrupt, and delete digital content and social media accounts supporting violent extremism. Nevertheless, extremist bots’ speed, scalability, and resilience make governments’ and companies’ counter-radicalization efforts challenging. Indeed, as the authors suggest, despite continuing efforts, social media platforms remain a haven for extremist communications, outreach, recruiting, and other activities, as extremist groups constantly adapt their bot warfare to outmaneuver and outwit algorithmic content moderation systems.

Between the lines

While scholars in the mental health field have taken a keen interest in software robots’ empathetic dimensions, very little attention has been given to their growing role in radicalization. Given that new technologies are rapidly blurring the lines between digital human-human relationships and human-machine relationships, we ought to direct serious effort toward preventing further bot attacks and developing outreach strategies that train social media users to recognize the malicious use of AI in information warfare.

