An error management approach to perceived fakeness of deepfakes: The moderating role of perceived deepfake targeted politicians’ personality characteristics

May 28, 2023

🔬 Research summary by Connor Wright, our Partnerships Manager.

[Original paper by Yu-Leung Ng]


Overview: Does the trustworthiness of the politician who is the subject of a deepfake video affect how trustworthy we think the video is? Would a description of what a deepfake is, shown alongside the video, help? This study explores just how challenging deepfakes are to judge, with the crux of the findings resting on the public's perception of the politician.


Introduction

Deepfakes are digitally manipulated videos that use neural networks, trained on video data, to impersonate their subjects. They have the potential to weaken a shared sense of social reality, as well as to be used in vengeful ways (such as creating sexual content). However, studies have also found that participants are surprisingly adept at identifying deepfake videos. Hence, this study focuses on deepfakes that impersonate politicians.

The study comprised 631 participants, segmented into four groups: 148 participants were shown deepfake videos with a description (of what a deepfake is), 157 were shown deepfake videos without a description, 143 were shown real videos with a description, and 153 were shown real videos without a description. In each condition, participants watched a deepfake and a real Trump video and a deepfake and a real Obama video. After each video, they were asked about the fakeness of the video and the extent to which it was positive (trustworthy) or negative (dangerous). They rated the statements that followed each video, such as “This video is fake” and “What Mr. Trump said in this video is fake,” on a scale of 1-7 (with 1 as strongly disagree, 4 as neutral, and 7 as strongly agree).
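To make the study's 2 × 2 between-subjects design concrete, here is a minimal sketch in Python of the four conditions and the rating scale described above. The condition labels and variable names are my own illustration, not from the paper; only the group sizes and the 7-point scale come from the summary.

```python
# Illustrative sketch of the study's 2 x 2 between-subjects design:
# video type (deepfake vs. real) x description (shown vs. not shown).
# Group sizes are those reported above; the labels are hypothetical.
conditions = {
    ("deepfake", "with_description"): 148,
    ("deepfake", "no_description"): 157,
    ("real", "with_description"): 143,
    ("real", "no_description"): 153,
}

# Each participant saw a deepfake and a real video of Trump and of Obama,
# then rated statements such as "This video is fake" on a 7-point Likert
# scale: 1 = strongly disagree, 4 = neutral, 7 = strongly agree.
LIKERT_SCALE = range(1, 8)

print(sum(conditions.values()), "participants across", len(conditions), "conditions")
```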

Key Insights

Within the study, error management theory explains the four possible outcomes of a decision:

  1. person A makes a decision based on true information (true positive), 
  2. person A does not make a decision as the information they have is false (true negative), 
  3. person A makes a decision based on false information (false positive, type 1 error), 
  4. person A does not make a decision even when the information is true (false negative, type 2 error).

Error management theory holds that the last two errors are generally made under uncertainty, and that when humans are unsure of the correct option, they usually opt for the decision that brings about the least costly error.
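The “least costly error” logic can be read as a simple expected-cost comparison under uncertainty. The sketch below is my own illustration of that principle (not code or numbers from the study): it assumes hypothetical costs for type 1 and type 2 errors and picks whichever action carries the lower expected cost.

```python
# Illustrative sketch of error management theory's core claim:
# under uncertainty, choose the action whose likely error is least costly.
# Probabilities and costs are hypothetical placeholders.

def choose_action(p_true: float, cost_type1: float, cost_type2: float) -> str:
    """Decide whether to trust a video, given p_true, the estimated
    probability that it is genuine, by comparing expected error costs."""
    # Trusting the video risks a type 1 error (acting on false information).
    expected_cost_trust = (1 - p_true) * cost_type1
    # Dismissing the video risks a type 2 error (rejecting true information).
    expected_cost_dismiss = p_true * cost_type2
    return "trust" if expected_cost_trust < expected_cost_dismiss else "dismiss"

# If believing a fake is judged far more costly than doubting a real video,
# the rule leans toward "dismiss" even when the video is probably genuine.
print(choose_action(p_true=0.6, cost_type1=10.0, cost_type2=2.0))  # -> dismiss
```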

With this in mind, the authors distinguished two types of error: a type 1 error consists of treating a video with negative content as true (false positive), while a type 2 error involves not trusting a trustworthy source (false negative). The two error types were then operationalized through four hypotheses:

Type 1 errors

  • Hypothesis 1: the more negative the content of a video (without a description) is, the more likely it will be perceived as fake in terms of a) its video and b) its message.
  • Hypothesis 2: compared to positive videos, participants will be more likely to view a negative video with a description as fake in terms of a) its video and b) its message.

Type 2 errors

  • Hypothesis 3: positively perceived video targets motivate participants to perceive a) the video and b) the message as trustworthy, even when the video is a deepfake.
  • Hypothesis 4: positively perceived video targets motivate participants to perceive a) the video and b) the message as trustworthy, even when the video is a deepfake and is accompanied by a description.

Results

  • The findings supported hypotheses 1, 2, and 3, but not 4.
  • Participants can distinguish a deepfake from a real video – they were able to identify the fake Trump and Obama messages and videos. However, type 1 and type 2 errors are still made.
  • Deepfake messages from Obama, alongside deepfake videos and messages from Trump, were accurately labeled as fake.
  • Defining deepfakes aids participants in identifying fake videos and messages.
  • Error management theory correctly predicted that participants would choose the least costly option when considering whether a message or video is fake.
  • The perceived dangerousness of a politician is a prominent factor in leading participants to label a video as a deepfake.

Between the lines

With these points in mind, the most salient takeaway is the following:

  • Messages and videos are considered real when the video subject is perceived as trustworthy.

Despite the potential of deepfake description labels and awareness of deepfakes' perceived dangers, it is essential to know that anyone can produce a deepfake, or have one created about them. The more trustworthy someone is perceived to be, the greater the danger that a deepfake involving them will be seen as authentic.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.
