Montreal AI Ethics Institute
Democratizing AI ethics literacy

Aging in an Era of Fake News (Research Summary)

October 5, 2020

Summary contributed by our researcher Alexandrine Royer, who works at The Foundation for Genocide Education.

*Link to original paper + authors at the bottom.


Mini-summary: With the release of Netflix’s The Social Dilemma, the upcoming U.S. elections, and persistent COVID-19 conspiracy theorists and deniers, online misinformation has resurfaced in public debate as a serious threat to public safety and democracy. Lessons from the 2016 U.S. elections showed that older adults were the most prone to sharing fake news, with cognitive decline the most commonly cited explanation for this behaviour.

Brashier and Schacter argue that other factors, such as greater interpersonal trust, difficulty detecting lies, a lower emphasis on accuracy when communicating, and unfamiliarity with social media, should also be considered when accounting for how older generations evaluate news. Reducing fake news sharing and increasing digital literacy among older adults is key to maintaining a healthy and informed civic society. Older adults had a 70.9% turnout at the last election compared to 46.1% among millennials; they merit more targeted strategies to effectively reduce the share of fake news online.

Full summary:

Older adults are popularly accused of being the first to fall for fake news. Statistics from the 2016 U.S. elections confirm this widespread belief: older adults’ Twitter feeds contained the highest counts of fake news, and those aged above 50 were vastly overrepresented among fake news “supersharers.” Blaming older adults’ cognitive decline, however, is only part of the answer. As the authors point out, people of every age use mental shortcuts to evaluate incoming information’s veracity. Seeing false statements repeatedly makes them easier to believe. Older adults are more prone to source-memory deficits, at risk of forgetting details about where a piece of information came from and whether it was fact-checked. If seen numerous times, the original false statement will be fresher and feel truer in their minds (i.e. fluency) than the corrective information accompanying it. Fact-checking does not necessarily shift people’s belief in fake news.

We must rethink our strategies for coping with the unrelenting influx of fake news beyond adding corrective fact-check measures. The authors point to research by Skurnik et al., which suggests that older adults, when repeatedly shown statements identified as false, paradoxically tend to list them as correct if later asked to evaluate the claim. With the knowledge accumulated over the years, older adults will reject statements that contradict facts they know about the world. The authors refer to a study by Allcott and Gentzkow (2017), in which older and younger adults were presented with fake headlines following the U.S. elections. Older adults performed better than their younger counterparts in discerning true versus false headlines at first glance. Their successful performance suggests that repetition, with viral news stories popping up regularly in their feeds, combined with some memory failures, is the more likely cause of older adults’ tendency to believe fake news.

Other factors to consider are the makeup of older adults’ social media networks and their goals when using these platforms. As people grow older, their social circles tend to narrow, yet their interpersonal trust grows. Older adults are thus more susceptible to bots and questionable pages designed to appear as real accounts. They will assume that information shared on social media by friends or acquaintances is factual unless given cues about a person’s character. The paper points to a study by Skurnik et al. showing that social context, and the character of a given person, can leave a longer and more lasting impression than “true or false” tags. Instead of debunking each of Donald Trump’s “alternative facts,” stating that the President averaged 15 false claims per day in 2018 may be more beneficial to older adults. The authors also suggest that older adults, when interacting online, may set aside the questionable factual elements of a statement, article, or candidate in order to pass on a moral message to their younger followers. Older adults can perform well in analytical thinking tasks, but this, combined with their own social motivations, may not guard them against misleading content on social media.

A final factor to consider is the digital literacy divide. Older adults are still new to the internet, with social media use among Americans over 65 going up from 8% to 40% in less than a decade. They are still learning their way around social media. Fake or sponsored news stories and manipulated images can be difficult to discern across all ages; only 9% of readers spot sponsored news stories. The authors mention research by Fenn et al. and Derksen et al. showing that claims appearing alongside photographs, even when the photographs do not confirm the claims, are more likely to be accepted, and that this “truthiness” effect persists across the lifespan. Pictures also tend to incentivize users to share information. Older adults do not purposefully intend to share false information: the authors refer to a study by Pennycook et al. showing that older adults self-report as less willing to share fake news than their younger counterparts. This discrepancy between their online behaviour and their stated intentions may reflect, according to the authors, a misunderstanding of how algorithms work and what sharing on the platform entails.

Brashier and Schacter’s review of psychological studies on older adults’ online behaviour should guide all those working in the field of disinformation. While it may be tempting to blame older adults’ cognitive deficiencies as the main culprit for fake news sharing, the reality is far more nuanced. Fake news sharing is likely to intensify as technology grows more sophisticated, and, combined with America’s aging population, this means a growing population susceptible to spreading disinformation. Psychological science can yield meaningful insights into how to stop the current misinformation crisis, with more tailored strategies that account for both the social contexts and the goals and motivations behind older adults’ online behaviour.


Original paper by Nadia M. Brashier and Daniel L. Schacter: https://www.researchgate.net/publication/341496718_Aging_in_an_Era_of_Fake_News

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.

