🔬 Research Summary by Johanna Walker, AI and society researcher at King’s College London.
[Original paper by Johanna Walker, Gefion Thuermer, Julian Vicens & Elena Simperl]
Overview: Misinformation in its many forms is a substantial and growing problem for society today. Whether financially or ideologically motivated, purveyors of misinformation do not abide by legal, technical, or moral rules. Therefore, new, ludic, narrative, gamified, and artistic approaches are needed. In this paper, we analyze the approaches taken in countering misinformation by 18 AI and machine learning works of art.
Introduction
What do a website that automatically labels photos uploaded by the public [1], a deepfake of Richard Nixon’s speech about the failure of the Apollo 11 mission [2], and a video installation of the faces of 4,000 French police officers [3] have in common? All three artworks use machine learning and AI to raise critical questions about data and AI. The first, ImageNet Roulette, was designed to expose the facial recognition biases baked into the training data set of the ImageNet database. In Event of Moon Disaster is a deepfake video created by MIT to raise awareness of what is technologically possible and to enable people to explore their own susceptibility. Capture commented on the asymmetry of power in facial recognition technologies and raised questions about the ‘dataveillance’ culture.
Critical AI art facilitates linking technical systems to the power structures they underpin, enables experiential learning, and, crucially, allows interpretation rather than straightforward explanation. Working with artists and startups creating 18 art projects in the MediaFutures program, we asked, (1) What strategic approaches does AI art take to countering misinformation? (2) Which data, tools, and techniques are utilized? and (3) How does the artistic approach add value to algorithmic approaches to countering misinformation?
In MediaFutures, artists are asked to use data as art material to create works that question the impact of misinformation on individuals and society. Some of these projects use AI to explore and challenge AI, while others use AI/machine learning to explore non-AI contexts. We found that AI-driven artistic interventions allow their creators to use some of the core counter-misinformation techniques of media literacy and fact-checking based on the same AI approaches but also utilize multisensory and emotional tools that have the possibility of reaching a wide range of demographics.
Key Insights
Online misinformation – a huge problem and growing
Misinformation of various types has been around for centuries but has grown as the technologies that enable its spread have grown. This matters as misinformation negatively affects the well-being of individuals, groups, and society in several ways. It can undermine democracy, reduce climate change consensus, exacerbate crises, and even lead to death. Further, the proliferation of ways in which misinformation can be encountered also matters, as repeated exposure to a piece of misinformation boosts its likelihood of being believed. Misinformation is also more compelling when it is delivered in emotional language or designed to be attention-grabbing.
Efforts to counter misinformation online have been hampered by both an age-old truism and a very contemporary concept. On the one hand, humans are emotional beings who respond to storytelling, whether that story is objectively true or not. On the other hand, in certain circles, there is now a reluctance to accept anything as objective fact and a mistrust of experts. This creates fertile ground for the most appealing information to be the most widely shared, regardless of veracity, and for there to be little leverage for counter-arguments.
Countering misinformation – media literacy and fact-checking
The two key approaches taken to stem the spread of misinformation online are fact-checking and the development of media literacy within populations (especially youthful ones). However, the effectiveness of fact-checking as a counteraction tool is somewhat undermined by people’s unwillingness to accept corrective fact-checks. While “falsehoods” can be corrected, feelings are more challenging. Media literacy is an individual’s development of a set of skills spanning critical thinking, evaluation strategies, search, and knowledge of the news and media industries.
Critical AI Art – provoking consideration of how technologies support social structures
Artists have worked with AI since the 1970s. Algorithms and art are a current topic of much interest, both in popular culture and academia, fueled by the accessibility of algorithmically generated art from text-to-image models such as DALL-E. However, there is also an emerging field of Critical AI art that seeks to address social tensions arising from technology and to enable a sense of critical distance from the technology.
AI Art Approaches in Misinformation
Most of the artworks in MediaFutures engage with the key existing strategies for countering misinformation online. They largely adhere to media literacy routes but also develop new, clean, verifiable datasets valuable for fact-checking. Many worked with deepfakes to raise awareness of the technology’s capabilities and impact. Only one artist rejected the idea that fact-checking would work as an approach. However, an emergent strategy appeared to be focused on collaborative intelligence, essentially a distributed network where each agent contributes autonomously to problem-solving. This approach has been used in, for instance, participatory democracy.
Responding to the challenge of emotion
One of the challenges for traditional counteraction strategies is that they largely apply considered, direct approaches to mitigating the effects of often highly emotional misinformation. Many of the artworks in MediaFutures tackle the challenge of emotion head-on, whether by responding emotionally or by neutralizing the emotion in the misinformation. Simply by being art, and by sitting outside the traditional online arenas of media and social media, art can create more neutral ground for discussing politicized subjects that are vulnerable to post-truth argumentation. One project used ‘standard’ classifier, clustering, and neural network approaches to detect fake news, but added an emotion-detection element to alert people that they had encountered untrustworthy news.
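The paper does not detail that project’s actual pipeline, but the underlying idea of combining content-based detection with an emotional-language signal can be sketched in a few lines. The lexicons, phrases, threshold, and function names below are invented purely for illustration; a real system would use trained classifiers rather than keyword lists.

```python
# Illustrative sketch only: flag text as potentially untrustworthy by
# combining a toy clickbait check with a density score for emotive language.
# Lexicons and threshold are invented for demonstration, not from the paper.

EMOTION_WORDS = {"outrageous", "shocking", "terrifying", "disaster",
                 "unbelievable", "furious", "horrifying"}
CLICKBAIT_MARKERS = {"you won't believe", "they don't want you to know",
                     "the truth about"}

def emotion_score(text: str) -> float:
    """Return the fraction of tokens drawn from the emotive-language lexicon."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    return sum(t.strip(".,!?") in EMOTION_WORDS for t in tokens) / len(tokens)

def flag_untrustworthy(text: str, threshold: float = 0.1) -> bool:
    """Alert if clickbait phrasing appears or emotive language is dense."""
    lowered = text.lower()
    if any(marker in lowered for marker in CLICKBAIT_MARKERS):
        return True
    return emotion_score(text) >= threshold

print(flag_untrustworthy("Shocking! This outrageous disaster is unbelievable!"))  # True
print(flag_untrustworthy("The committee published its quarterly report today."))  # False
```

The point of the emotional signal is not to judge truth directly but to surface the attention-grabbing framing that, as noted above, makes misinformation more compelling.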
Engaging audiences with narrative
Deliberately trying to engage an audience or making an intervention ‘less boring’ has not necessarily been a key goal of media literacy attempts, many of which work with captive audiences. Artists, however, are experienced in capturing the attention of potential audiences in a noisy world. Through the ability to be multi-sensory (even online, through the use of sound), art has an extra dimension through which to communicate with the audience and be, in the words of one artist, “much more than representations of data.” This is key, as it is this appeal which enables virality. Many of our interviewees discussed narrative as a compelling technique for engaging with their audience or ensuring they engaged with each other. This narrative could then be distributed and consumed via any of the multisensory methods described above, from a brief cartoon to a virtual exhibition of refugee art, but with the ability to appeal to the natural human instinct for storytelling. This reflects the findings that artistic approaches enable interpretation by the audience, requiring engagement rather than one-way instruction.
The artworks frequently demonstrated economy, serving multiple aims while allowing the audience to engage at whatever level they felt comfortable. For instance, some artworks offer tools that let individuals engage with the artwork simply, then provide an opportunity to engage further, either with other individuals or by carrying knowledge from the artwork into other parts of life. In this way, an artwork operates on many levels: as a visual, tactile, or “sonified” experience, as an educational tool, and then as a tool of active choice or protest against misinformation.
The future of misinformation counteraction?
This work therefore offers a range of considerations for designing future technologies or interventions against misinformation. First, as data is an established art material, many artists are well-positioned to bring both technical and artistic skills to their work, creating highly integrated artworks. We also found that narrative is a powerful tool that can be exploited through the data/art relationship and resists easy binaries. Integrating emotion into the response to emotionally heightened misinformation allows engagement on a more equal footing, which may help reduce the inequity of virality. Finally, we find the idea of engaging with misinformation neither before nor after exposure but synchronously, via collaborative and participatory opportunities for engagement, to be compelling and worthy of further investigation.
Between the lines
Art is an instinctively intuitive experience, and the emotional response we have to good art can make it both compelling and an effective form of communication about complex concepts such as datafication and AI. By incorporating the narrative techniques used by misinformation campaigns into AI techniques to counter them, we can fight fire with fire.
These artistic methods are a valuable augmentation to existing approaches for tackling misinformation. Further exploratory research is needed to determine if and when such approaches work for a given dataset, AI technology, or form of misinformation.
References
[1] https://www.chiark.greenend.org.uk/~ijackson/2019/ImageNet-Roulette-cambridge-2017.html
[2] https://arts.mit.edu/in-event-of-moon-disaster/
[3] https://www.theartnewspaper.com/2020/12/04/censored-work-showing-faces-of-4000-french-police-officers-goes-on-show-in-berlin