Research Summary: The cognitive science of fake news

September 13, 2020

Summary contributed by Andrew Buzzell, a PhD student in Philosophy at York University.

*Author & link to original paper at the bottom.


Mini-summary: How many people are sincerely fooled by fake news? A moment’s reflection reminds us that we often express attitudes towards propositions that resemble belief but aren’t quite the same, signalling instead approval, encouragement, aspiration, or even mockery. The mainstream account of the psychology of fake news (such as it is, given the infancy of this area of study) explains the high level of self-reported belief in fake news as the result of partisan motivated reasoning. This paper challenges the viability of accepting self-reports at face value and raises several difficulties for the motivated-reasoning explanation.

The paper focuses on three core questions:

  1. To what extent do we really believe fake news?
  2. What explains this belief?
  3. How can we mitigate harms?

Full summary:

Do people believe fake news?

We tend to study fake news by surveying people, yet there are known difficulties in inferring belief from behaviour: studies show that behaviour often fails to track assessed political beliefs, and that the assertion of political beliefs is frequently a form of cheerleading rather than sincere agreement, a phenomenon that has been called “expressive responding” (Berinsky, 2018; Bullock et al., 2015). There is empirical evidence that reports of political belief are often just such expressive responses.

Self-reports face a further challenge: motivated inference can shape our responses, since we use heuristics and biased sampling to construct beliefs within the very survey or interaction in which the belief is elicited.

In short, there are substantial challenges to determining the extent to which people truly believe fake news.

What explains this belief?

A tempting explanation for belief in fake news is the deficit model: limited cognitive and epistemic resources make us susceptible. But empirical evidence shows that similar deficits do not produce similar tendencies to believe fake news when partisan framing is present. Kahan (2016, 2017) argues that identity-protective cognition explains this: the problem is not a limitation of our cognitive resources, but the values that inform how we deploy them. The paper assesses empirical evidence both supporting and challenging this view and suggests that the question remains open.

How might belief in (and spread of) fake news be prevented or reduced?

There is a substantial empirical literature on the efficacy of correction and on the perseverance of belief in the face of interventions such as fact-checking and warning labels. These efforts can have three kinds of negative consequences:

  1. backfire effects: presenting corrective information can increase belief in the false proposition, though the evidence for the strength and prevalence of this effect is conflicting.
  2. implied truth effects: fake news that is not labelled or corrected becomes more convincing, a significant problem given the challenges of deploying corrective measures at internet scale.
  3. tainted truth effects: erroneous corrective efforts can reduce belief in veridical news.

Another kind of intervention tries to nudge the news consumer into a cognitive state that is less likely to be influenced by identity protection and motivation, either by inducing deliberation (Bago et al., 2020) or by prompting the consumer to evaluate content in terms of its accuracy (Pennycook et al., 2020). There is some evidence that this approach is effective. Inoculation theory offers yet another way to prevent belief in fake news: exposing people to weakened, less persuasive forms of it, for example through games.

Summing up

The article concludes its survey of the cognitive science of fake news by observing that even where there is some evidence that analytic cognition reduces belief in fake news, further questions remain about the relation between believing fake news and the behaviour of sharing and distributing it. Empirical research on the relation between credence and sharing behaviour is inconclusive.

A particularly interesting takeaway is the need for researchers to critically appraise their laboratory results and, in particular, to attend to the more nuanced propositional attitudes we can adopt towards news, such as cheerleading, trolling, and other forms of expressive responding.

References:

Bago, B., Rand, D.G., Pennycook, G., 2020. Fake news, fast and slow: Deliberation reduces belief in false (but not true) news headlines. J Exp Psychol Gen. https://doi.org/10.1037/xge0000729

Berinsky, A.J., 2018. Telling the Truth about Believing the Lies? Evidence for the Limited Prevalence of Expressive Survey Responding. The Journal of Politics 80, 211–224. https://doi.org/10.1086/694258

Bullock, J.G., Lenz, G., 2019. Partisan Bias in Surveys. Annual Review of Political Science 22, 325–342. https://doi.org/10.1146/annurev-polisci-051117-050904


Original paper by Levy, N. L., & Ross, R. M.: https://psyarxiv.com/3nuzj/
