🔬 Research Summary by Giuliana Luz Grabina, a philosophy undergraduate student at McGill University, with an interest in AI/technology policy regulation from a gendered perspective.
[Original paper by Peter Mantello, Tung Manh Ho, and Lena Podoletz]
Overview: Conversational AI bots are now augmenting and substituting for human effort across various fields, including advertising, finance, mental health counseling, dating, and wellness. This paper suggests that even jihadis and neo-Nazis may not be safe from the AI job takeover. As conversational bots continue to evolve, the authors warn, they have become a vital radicalization and recruitment tool for extremist organizations.
Introduction
In the aftermath of 9/11, a now-defunct chatbot was launched as a cost-effective measure to handle the sudden interest in, and overwhelming traffic to, the US military recruitment website. The chatbot, Sergeant Star, responded to inquiries about scholarships, training length, and career advancement. Since then, the authors argue, social bots have gained traction as a tool for strategic communication and indoctrination by various extremist organizations, including jihadi and neo-Nazi groups.
Key Insights
AI-Powered Propaganda: Manufacturing Consent for Collective Violence
While propaganda has historically played a significant role in manufacturing consent for collective violence through various mediums such as prose, poetry, music, print media, radio, film, and television, the authors argue that its contemporary manifestation is “deeply hard-wired into modern networks of communication, often cloaked in unseen algorithms and automated agents.” These algorithmic agents provocateurs can perform various disruptive functions. For instance, a “noise” bot can disrupt communication by flooding social media sites with thousands of adversarial posts, thereby diluting the impact of opposing content. Deploying bots to drown out opposing political conversations or voices of dissent, however, is not exclusive to extremists or authoritarian regimes: a 2019 study by Oxford researchers Bradshaw & Howard found evidence of psycho-social manipulation campaigns by autonomous artificial agents in 70 countries, up from 48 countries in 2018 and 28 in 2017. As the authors put it, “AI has allowed for a paradigm shift to occur in how political propaganda is constructed, negotiated, and ultimately legitimated.” In what follows, the authors use a biopolitical lens to investigate and analyze the malicious use of social bots as an emerging psychological tool for extremist recruitment.
Disembodied Connections and Affective Bonding
As social relationships move past the necessity of physical proximity, especially in the context of online radicalization, and as bots become increasingly intelligent conversationalists, people will find it harder to distinguish their digital human-human relationships from their human-machine relationships. Thus, according to the authors, it is no surprise that bots have become an important tool for “affective bonding,” the “psycho-physical phenomenon where an individual develops an emotional relationship with another person or group.” More significantly, affective bonding is critical to radicalization.
On an organizational level, radical groups augment their strategic communications by incorporating bots into their operations, creating the illusion of a sophisticated organizational framework. This, in turn, encourages positive emotions (pride, joy, respect, and belonging) toward the in-group while generating negative feelings (hate, anger, and disgust) toward out-groups. To illustrate, the authors cite the IS infomercial “Harvest of the Soldiers,” a bot-driven video sent weekly through private Telegram channels of IS supporters, depicting battlefield statistics of recent kills and victories interspersed with the recurring figure of an Arab warrior on horseback.
Furthermore, to strengthen affective bonding at the organizational level, extremist bots can perform various other functions to identify susceptible targets and connect them with official human recruiters or other supporters. Just as phishing bots can harvest social media users’ information and then attempt to persuade them to add a new neighbor to their contact lists, extremist bots can connect potential recruits who share similar interests but have not yet engaged with each other. For instance, a friending bot might identify and match small groups of social media users whose profiles an algorithm deems to have a “high likelihood of being radicalized.” In this way, the authors claim, the friending bot helps initiate secure communications among like-minded users, who might then receive private introductions to a recruiter.
From Assistants to Recruiters: The Future of Online Radicalization
Like mental health counseling bots such as “ELIZA” or “Woebot,” extremist bots can connect emotionally with newcomers, linking them with like-minded people before handing them off to a human recruiter for potential recruitment. They can be so effective, in fact, that iN2 researchers suggest the quick response and upbeat tone of social bots’ replies to applicant queries “heightened individual’s motivation to enlist.” The researchers noted that bots facilitate a stronger emotional connection in the recruitment process, making applicants feel more at ease and less suspicious than when dealing with a real person. As bots continue to evolve, the authors warn, software robots will assume a prominent role in online radicalization. In addition to offering greater reach and heightened levels of anonymity, safety, and security for those joining extremist groups, bots may also empower unaffiliated sympathizers to participate in acts of violence.
Countering Radicalization: Progress and Limitations
Since 2015, social media companies and security agencies have taken the initiative, using both AI and human interlocutors, to counter the influx of jihadi bots in online radicalization. For instance, in 2018, the European Union’s law enforcement agency, Europol, organized “Referral Action Day,” an annual campaign to detect, disrupt, and delete digital content and social media accounts supporting violent extremism. Nevertheless, extremist bots’ speed, scalability, and resilience make counter-radicalization efforts by governments and companies challenging. Indeed, as the authors suggest, despite continuing efforts, social media platforms remain a haven for extremist communication, outreach, recruiting, and other activities, as extremist groups constantly adapt their bot warfare to outmaneuver and outwit algorithmic content moderation systems.
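To make this moderation arms race concrete, here is a minimal, purely illustrative sketch (not from the paper) of the kind of naive heuristic an algorithmic moderation system might use to flag “noise” bots. The function names and thresholds are hypothetical assumptions; real platform systems are far more sophisticated and their details are not public.

```python
from difflib import SequenceMatcher

def near_duplicate_ratio(posts: list[str]) -> float:
    """Fraction of consecutive post pairs that are near-identical."""
    if len(posts) < 2:
        return 0.0
    dupes = sum(
        SequenceMatcher(None, a, b).ratio() > 0.9  # fuzzy string similarity
        for a, b in zip(posts, posts[1:])
    )
    return dupes / (len(posts) - 1)

def looks_like_noise_bot(posts: list[str], posts_per_hour: float) -> bool:
    """Flag accounts posting near-duplicate content at inhuman rates.
    Both thresholds are illustrative assumptions, not empirical values."""
    return posts_per_hour > 30 and near_duplicate_ratio(posts) > 0.5
```

Even this toy detector illustrates the authors’ point: an operator who paraphrases each post or throttles the account’s posting rate slips under both thresholds, forcing moderators into the continual cycle of adaptation the paper describes.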
Between the lines
While scholars in the mental health field have taken a keen interest in software robots’ empathetic dimensions, very little attention has been paid to their growing role in radicalization. Given that new technologies are rapidly blurring the lines between digital human-human relationships and human-machine relationships, we ought to direct serious effort toward preventing further bot attacks and toward outreach strategies that train social media users to recognize the malicious use of AI in information warfare.