Research summary: A Picture Paints a Thousand Lies? The Effects and Mechanisms of Multimodal Disinformation and Rebuttals Disseminated via Social Media

June 8, 2020

Dr. Marianna Ganapini (@MariannaBergama) is an Assistant Professor of Philosophy at Union College.

*Authors of original paper & link at the bottom


In the current information environment, fake news and disinformation are spreading, and solutions are needed to counter the effects of the dissemination of inaccurate news and information. In particular, many worry that online disinformation, understood as the intentional dissemination of false information through social media, is becoming a powerful persuasive tool to influence and manipulate users’ political views and decisions.

Whereas research on disinformation has so far mostly focused on textual input alone, this paper taps into a new line of research by examining multimodal disinformation, which combines text and images. Visual tools may represent a new frontier for the spread of misinformation because they are likely to be perceived as more ‘direct’ representations of reality. Accordingly, the working hypothesis is that multimodal information will be more readily accepted and believed than merely textual input. And since images can now be easily manipulated, the worry that animates this research is that they will constitute a very powerful tool in future disinformation campaigns. Therefore, the primary goals of this paper are (1) to investigate the persuasive power of multimodal online disinformation in the US and (2) to study the effects of journalistic debunking tools against multimodal disinformation.

In the experimental study conducted in this paper, subjects were shown false tweets concerning two highly politicized topics: school shootings and refugees. The tweets contained either textual input alone or text plus an image, and they appeared to come either from established news sources (e.g., CNN) or from ordinary citizens. In some cases, subjects were also shown corrective information: a rebuttal tweet (text plus image) from PolitiFact (a popular fact-checking organization) that disproved the fake tweet’s content. Subjects were then asked to rate the initial tweets’ credibility and truthfulness. The political and ideological views of the participants were also tracked to establish whether they influenced the participants’ reactions to multimodal disinformation and subsequent debunking strategies.

The outcomes of this study are as follows:

  • The empirical results partially support the hypothesis: multimodal tweets were rated as slightly more trustworthy than solely textual ones. This is likely because words are abstract indicators, whereas images provide a seemingly direct representation of reality, so multimodal tweets may appear more truthful and believable than merely textual input.
  • The results indicate that fact checkers constitute useful debunking tools to counter misinformation and disinformation. The positive effects of fact-checking were stronger for those whose political and ideological beliefs aligned with the debunked content; that is, users who would typically agree with the content of the false tweets were more affected by the corrective tweets from PolitiFact. This result runs counter to the expectation of the so-called ‘backfire effect’ (i.e., that contrary evidence not only fails to change the minds of partisan users but actually reinforces their preexisting political views). It remains an open question, however, whether multimodal fact checkers are more effective than simply textual corrective information.
  • The results indicate that the source of the information did not matter: subjects in the study assessed the credibility of the news inputs independently of whether they came from established journalistic sources such as CNN or from ordinary citizens. This result paints a discouraging picture of users’ media literacy skills because it reveals that they are unable to distinguish reliable news sources from unreliable ones.

The paper concludes with two recommendations. First, fact checkers should be widely used as journalistic tools, as they are effective ways to debunk false information online. Second, the paper highlights the importance of media literacy in fostering citizens’ ability to spot misinformation and in educating them to rely on established, reliable news sources.


Original paper by Michael Hameleers, Thomas E. Powell, Toni G.L.A. Van Der Meer & Lieke Bos: https://doi.org/10.1080/10584609.2019.1674979
