Montreal AI Ethics Institute

Democratizing AI ethics literacy


Automating Extremism: Mapping the Affective Roles of Artificial Agents in Online Radicalization

June 24, 2023

🔬 Research Summary by Giuliana Luz Grabina, a philosophy undergraduate student at McGill University, with an interest in AI/technology policy regulation from a gendered perspective.

[Original paper by Peter Mantello, Tung Manh Ho, and Lena Podoletz]

Overview: Conversational AI bots are now augmenting and substituting human efforts across various fields, including advertising, finance, mental health counseling, dating, and wellness. This paper suggests that even jihadis and neo-Nazis may not be safe from the AI job takeover. As conversational bots continue to evolve, the authors warn, they have become a vital radicalization and recruitment tool for extremist organizations.


Introduction

In the aftermath of 9/11, a now-defunct chatbot was launched as a cost-effective measure to handle the sudden interest in, and overwhelming traffic to, the US military recruitment website. The chatbot, Sergeant Star, responded to various inquiries, including questions about scholarships, length of training, and career advancement. Since then, the authors argue, social bots have gained traction as a tool for strategic communication and indoctrination efforts by various extremist organizations, including jihadi and neo-Nazi groups.

Key Insights

AI-Powered Propaganda: Manufacturing Consent for Collective Violence 

While propaganda has historically played a significant role in manufacturing consent for collective violence through various mediums such as prose, poetry, music, print media, radio, film, and television, the authors argue that its contemporary manifestation is “deeply hard-wired into modern networks of communication, often cloaked in unseen algorithms and automated agents.” These algorithmic agents provocateurs can perform a variety of disruptive functions. For instance, a “noise” bot can disrupt communication by flooding social media sites with thousands of adversarial posts, thereby diluting the impact of opposing content. However, deploying bots to drown out opposing political conversations or voices of dissent is not exclusive to extremists or authoritarian regimes. Indeed, a 2019 study by Oxford researchers Bradshaw & Howard found evidence of psycho-social manipulation campaigns by autonomous artificial agents taking place in 70 countries, an increase from 48 countries in 2018 and 28 countries in 2017. As the authors put it, “AI has allowed for a paradigm shift to occur in how political propaganda is constructed, negotiated, and ultimately legitimated.” In what follows, the authors investigate and analyze, through a biopolitical lens, the malicious use of social bots as an emerging psychological tool for extremist recruitment.

Disembodied Connections and Affective Bonding

As social relationships move past the necessity of physical proximity, especially in the context of online radicalization, and as bots become increasingly intelligent conversationalists, people will encounter greater difficulty distinguishing between their digital human-human relationships and their human-machine relationships. Thus, according to the authors, it is no surprise that bots have become an important tool for “affective bonding,” the “psycho-physical phenomenon where an individual develops an emotional relationship with another person or group.” More significantly, affective bonding is critical to radicalization.

On an organizational level, radical groups augment their strategic communications by incorporating bots into their operations, creating an illusion of a sophisticated organizational framework. This, in turn, encourages positive emotions (pride, joy, respect, and belonging) while equally generating negative feelings (hate, anger, and disgust) for out-groups. To illustrate, the authors use the example of the IS infomercial “Harvest of the Soldiers,” a bot-driven video depicting battlefield statistics of recent kills and victories, interspersed with the recurring figure of an Arab warrior on horseback sent weekly through private Telegram channels of IS supporters. 

Furthermore, to strengthen affective bonding at an organizational level, extremist bots can perform various other functions to identify susceptible targets and connect them with official human recruiters or other supporters. Just as phishing bots harvest social media users’ information and then attempt to persuade them to add a new neighbor to their contact lists, extremist bots can connect potential recruits who share similar interests but have not yet engaged with each other. For instance, a friending bot might identify and match small groups of social media users whose profiles an algorithm deems to have a “high likelihood of being radicalized.” In this way, the authors claim, the friending bot helps initiate secure communications among like-minded users, who might then receive private introductions to a recruiter.

From Assistants to Recruiters: The Future of Online Radicalization

Like mental health counseling bots such as “Eliza” or “Woebot,” extremist bots can connect emotionally with newcomers, linking them with like-minded people before handing them off to a human recruiter for potential recruitment. These bots are so effective, in fact, that iN2 researchers suggest the quick response and upbeat tone of social bots’ replies to applicant queries “heightened individual’s motivation to enlist.” They noted that bots facilitate a stronger emotional connection in the recruitment process, making applicants feel more at ease and less suspicious than when dealing with a real person. As bots continue to evolve, the authors warn, software robots will assume a prominent role in online radicalization. In addition to offering greater reach and heightened levels of anonymity, safety, and security for those joining extremist groups, bots may also empower unaffiliated sympathizers to participate in acts of violence.

Countering Radicalization: Progress and Limitations

Since 2015, social media companies and security agencies have taken the initiative, using both AI and human interlocutors, to manage the influx of jihadi bots in online radicalization. For instance, in 2018, the European Union’s law enforcement agency, Europol, organized “Referral Action Day,” an annual campaign to detect, disrupt, and delete digital content and social media accounts supporting violent extremism. Nevertheless, the extremist bots’ speed, scalability, and resilience make governments’ and companies’ counter-radicalization efforts challenging. Indeed, as the authors suggest, despite these continuing efforts, social media platforms remain a haven for extremist communication, outreach, recruiting, and other activities, as extremist groups constantly adapt their bot warfare to outmaneuver and outwit algorithmic content moderation systems.

Between the lines

While scholars in the mental health field have taken a keen interest in software robots’ empathetic dimensions, very little attention has been paid to their growing role in radicalization. Given that new technologies are rapidly blurring the lines between digital human-human relationships and human-machine relationships, we ought to direct serious effort toward preventing further bot attacks and developing outreach strategies that train social media users to recognize the malicious use of AI in information warfare.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.


  • © 2025 MONTREAL AI ETHICS INSTITUTE.
  • This work is licensed under a Creative Commons Attribution 4.0 International License.
  • Learn more about our open access policy here.