Montreal AI Ethics Institute

Democratizing AI ethics literacy


Research summary: A Picture Paints a Thousand Lies? The Effects and Mechanisms of Multimodal Disinformation and Rebuttals Disseminated via Social Media

June 8, 2020

Dr. Marianna Ganapini (@MariannaBergama) is an Assistant Professor of Philosophy at Union College.

*Authors of original paper & link at the bottom


In the current information environment, fake news and disinformation are spreading, and solutions are needed to counter the effects of the dissemination of inaccurate news and information. In particular, many worry that online disinformation – understood as the intentional dissemination of false information through social media – is becoming a powerful persuasive tool for influencing and manipulating users’ political views and decisions.

Whereas research on disinformation has so far focused mostly on textual input, this paper taps into a new line of research by examining multimodal disinformation, which combines text and images. Visual tools may represent a new frontier for the spread of disinformation because they are likely to be perceived as more ‘direct’ representations of reality. Accordingly, the working hypothesis is that multimodal information will be more readily accepted and believed than merely textual input. And since images can now be easily manipulated, the worry animating this research is that they will constitute a very powerful tool in future disinformation campaigns. The primary goals of this paper are therefore (1) to investigate the persuasive power of multimodal online disinformation in the US and (2) to study the effects of journalistic debunking tools against multimodal disinformation.

In the experimental study conducted in this paper, subjects were shown false tweets concerning two highly politicized topics: school shootings and refugees. The tweets were either textual only or text + image, and they came either from established news sources (e.g., CNN) or from ordinary citizens. In some cases, subjects were also shown corrective information: a rebuttal tweet (text + image) from PolitiFact (a popular fact-checking organization) disproving the fake tweet’s content. Subjects were then asked to rate the initial tweets’ credibility and truthfulness. The political and ideological views of the participants were also tracked to establish whether they influenced participants’ reactions to multimodal disinformation and to subsequent debunking strategies.

The outcomes of this study are the following:

  • The empirical results offer partial support for the hypothesis: multimodal tweets were rated as slightly more trustworthy than solely textual inputs. This is likely because words are abstract indicators, whereas images provide a seemingly direct representation of reality, so multimodal tweets may appear more truthful and believable than merely textual ones.
  • The results indicate that fact checkers are useful debunking tools for countering misinformation and disinformation. The positive effects of fact-checking were stronger for those whose political and ideological beliefs aligned with the debunked content: users who would typically agree with the content of the false tweets were more affected by the corrective tweets from PolitiFact. This result runs counter to the expected ‘backfire effect’ (i.e., the idea that contrary evidence not only fails to change the minds of partisan users but actually reinforces their preexisting political views). It remains an open question, however, whether multimodal fact checks are more effective than purely textual corrective information.
  • The results indicate that the source of the information did not matter: subjects assessed the credibility of the news inputs independently of whether they came from established journalistic sources such as CNN or from ordinary citizens. This result paints a discouraging picture of users’ media literacy skills because it reveals that they are unable to distinguish reliable news sources from unreliable ones.

The paper concludes with two recommendations. First, fact checkers should be widely used as journalistic tools, as they are an effective way to debunk false information online. Second, the paper highlights the importance of media literacy in fostering citizens’ ability to spot misinformation and in educating them to rely on established, reliable news sources.


Original paper by Michael Hameleers, Thomas E. Powell, Toni G.L.A. Van Der Meer & Lieke Bos: https://doi.org/10.1080/10584609.2019.1674979

