Montreal AI Ethics Institute

Democratizing AI ethics literacy


An error management approach to perceived fakeness of deepfakes: The moderating role of perceived deepfake targeted politicians’ personality characteristics

May 28, 2023

šŸ”¬ Research summary by Connor Wright, our Partnerships Manager.

[Original paper by Yu-Leung Ng]


Overview: Does the trustworthiness of the politician who is the subject of a deepfake video affect how trustworthy we think the video is? Would a description of what a deepfake is, shown alongside the video, help? This study explores just how difficult deepfakes can be to judge, with the crux of the findings lying in the public's perception of the politician.


Introduction

Deepfakes are digitally manipulated videos that use neural networks trained on video data to impersonate their subjects. They have the potential to weaken a shared sense of social reality and can be used in vengeful ways (such as creating fake sexual content). However, it has also been found that participants in deepfake studies are surprisingly adept at identifying deepfake videos. Against this backdrop, this study focuses on deepfakes that aim to impersonate politicians.

The study's 631 participants were split into four groups: 148 saw deepfake videos with a description (of what a deepfake is), 157 saw deepfake videos without a description, 143 saw real videos with a description, and 153 saw real videos without a description. Each group was shown a deepfake and a real video of Trump and a deepfake and a real video of Obama. After each video, participants were asked about the fakeness of the video and the extent to which it was positive (trustworthy) or negative (dangerous). They rated the statements that followed each video, such as ā€œThis video is fakeā€ and ā€œWhat Mr. Trump said in this video is fake,ā€ on a 1–7 scale (1 = strongly disagree, 4 = neutral, 7 = strongly agree).
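To make the design concrete, here is a minimal sketch of the 2Ɨ2 condition structure and the rating scale described above; the group labels, constant names, and the record_rating helper are illustrative assumptions of this summary, not code or materials from the original paper.

```python
# Illustrative sketch (not from the paper) of the 2x2 between-subjects design:
# video authenticity x presence of a deepfake description.

# Group sizes as reported in the summary above.
group_sizes = {
    ("deepfake", "with description"): 148,
    ("deepfake", "no description"): 157,
    ("real", "with description"): 143,
    ("real", "no description"): 153,
}

# After each video (real/deepfake Trump, real/deepfake Obama), participants
# rated statements on a 7-point agreement scale.
LIKERT_ANCHORS = {1: "strongly disagree", 4: "neutral", 7: "strongly agree"}

EXAMPLE_ITEMS = [
    "This video is fake.",
    "What Mr. Trump said in this video is fake.",
]


def record_rating(item: str, rating: int) -> dict:
    """Validate and package a single 1-7 rating for one post-video statement."""
    if not 1 <= rating <= 7:
        raise ValueError("ratings must be on the 1-7 scale")
    return {"item": item, "rating": rating}


if __name__ == "__main__":
    print(record_rating(EXAMPLE_ITEMS[0], 6))
```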

Key Insights

Within the study, error management theory explains the four possible outcomes of a decision:

  1. Person A makes a decision based on true information (true positive),
  2. Person A does not make a decision because the information they have is false (true negative),
  3. Person A makes a decision based on false information (false positive, type 1 error),
  4. Person A does not make a decision even when the information is true (false negative, type 2 error).

Error management theory explains that the last two errors are generally made under uncertainty: when unsure of the correct option, humans usually opt for the decision that brings about the least costly error.
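As a rough illustration of how these four outcomes map onto judging a deepfake, the toy function below classifies a viewer's decision against a video's actual status; the function name and output labels are shorthand for this summary, not terminology or code from the study.

```python
# Toy mapping of the four error-management outcomes onto the deepfake setting,
# where "accepts" means treating the video or its message as true.
# Purely illustrative; not code from the study.

def classify_outcome(video_is_real: bool, viewer_accepts: bool) -> str:
    if video_is_real and viewer_accepts:
        return "true positive: acts on true information"
    if not video_is_real and not viewer_accepts:
        return "true negative: rejects false information"
    if not video_is_real and viewer_accepts:
        return "false positive (type 1 error): acts on false information"
    return "false negative (type 2 error): rejects true information"


# Trusting a deepfake is the type 1 error the authors highlight.
print(classify_outcome(video_is_real=False, viewer_accepts=True))
# Distrusting a genuine video from a trustworthy source is the type 2 error.
print(classify_outcome(video_is_real=True, viewer_accepts=False))
```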

With this in mind, the authors distinguished two types of error: a type 1 error consists of treating a video with negative content as true (false positive), while a type 2 error involves not trusting a trustworthy source (false negative). Consequently, the two error types were operationalized through four hypotheses, two for each:

Type 1 errors

  • Hypothesis 1: the more negative the content of a video (without a description) is, the more likely it will be perceived as fake in terms of a) its video and b) its message.
  • Hypothesis 2: compared to positive videos, participants will view a negative video accompanied by a description as fake in terms of a) its video and b) its message.

Type 2 errors

  • Hypothesis 3: positive video targets motivate the participants to perceive a) the video and b) the message as trustworthy, even when the video is a deepfake.
  • Hypothesis 4: positive video targets motivate the participants to perceive a) the video and b) the message as trustworthy, even when the video itself is a deepfake and has a description.

Results

  • The findings supported hypotheses 1, 2, and 3, but not 4.
  • Participants can generally distinguish a deepfake from a real video: they were able to identify the fake Trump and Obama messages and videos. However, type 1 and type 2 errors were still made.
  • Deepfake messages from Obama, alongside deepfake videos and messages from Trump, were accurately labeled as fake.
  • Defining deepfakes aids participants in identifying fake videos and messages.
  • Error management theory correctly predicted that participants would choose the least costly option when considering whether a message or video is fake.
  • The perceived danger of a politician is a prominent factor in whether participants label a video as a deepfake.

Between the lines

With these findings in mind, the most salient takeaway is the following:

  • Messages and videos are considered real when the video subject is perceived as trustworthy.

Despite the potential of deepfake description labels and awareness of deepfakes' dangers, it is essential to remember that anyone can produce a deepfake, or have one created about them. The more trustworthy someone is perceived to be, the greater the danger that a deepfake involving them will be seen as authentic.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.
