Montreal AI Ethics Institute

Democratizing AI ethics literacy


An error management approach to perceived fakeness of deepfakes: The moderating role of perceived deepfake targeted politicians' personality characteristics

May 28, 2023

🔬 Research summary by Connor Wright, our Partnerships Manager.

[Original paper by Yu-Leung Ng]


Overview: Does the trustworthiness of the politician who is the subject of a deepfake video affect how trustworthy we judge the video to be? Does an accompanying description of what a deepfake is help? This study explores how testing deepfakes can be, with the crux of the findings lying in the public's perception of the politician.


Introduction

Deepfakes are digitally manipulated videos that use neural networks trained on video data to impersonate their subjects. They have the potential to weaken a shared sense of social reality and can be used maliciously (for example, to fabricate sexual content). However, studies have also found that participants are surprisingly adept at identifying deepfake videos. Hence, this study focuses on deepfakes that impersonate politicians.

The study comprised 631 participants, segmented into four conditions: 148 participants saw deepfake videos with a description (of what a deepfake is), 157 saw deepfake videos without a description, 143 saw real videos with a description, and 153 saw real videos without a description. Each group was shown a deepfake and a real Trump video and a deepfake and a real Obama video. After each video, participants were asked about the fakeness of the video and the extent to which it was positive (trustworthy) or negative (dangerous). They rated the statements following each video, such as "This video is fake" and "What Mr. Trump said in this video is fake," on a scale of 1-7 (1 strongly disagree, 4 neutral, 7 strongly agree).
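The four conditions above amount to a 2x2 between-subjects design (video type x description shown). A minimal sketch of that design, using only the group sizes given in this summary (the dictionary layout and labels are my own, not the paper's):

```python
# Sketch of the study's 2x2 between-subjects design.
# Factor 1: video type (deepfake vs. real);
# Factor 2: whether a description of what a deepfake is was shown.
groups = {
    ("deepfake", "with_description"): 148,
    ("deepfake", "without_description"): 157,
    ("real", "with_description"): 143,
    ("real", "without_description"): 153,
}

# Each condition watched four clips: a deepfake and a real video
# of Trump, and a deepfake and a real video of Obama.
for (video_type, description), n in groups.items():
    print(f"{video_type:>8} / {description:<19} n = {n}")
```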

Key Insights

Within the study, error management theory explains the four possible outcomes of a decision:

  1. person A makes a decision based on true information (true positive), 
  2. person A does not make a decision as the information they have is false (true negative), 
  3. person A makes a decision based on false information (false positive, type 1 error), 
  4. person A does not make a decision even when the information is true (false negative, type 2 error).

Error management theory holds that the last two errors are generally made under uncertainty: when unsure which option is correct, humans tend to opt for the decision whose potential error is least costly.
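The least-costly-error logic can be sketched as a tiny expected-cost decision rule. This is an illustrative model only; the probabilities and cost values below are assumptions for the sake of the example, not figures from the paper:

```python
# Illustrative sketch of error management theory's asymmetric-cost
# decision rule (all numbers are hypothetical, not from the study).

def choose(p_true: float, cost_false_positive: float, cost_false_negative: float) -> str:
    """Decide whether to trust a claim so as to minimize expected error cost.

    Trusting risks a false positive with probability (1 - p_true);
    rejecting risks a false negative with probability p_true.
    """
    expected_cost_trust = (1 - p_true) * cost_false_positive
    expected_cost_reject = p_true * cost_false_negative
    return "trust" if expected_cost_trust <= expected_cost_reject else "reject"

# Under full uncertainty (p_true = 0.5), the cheaper error dominates:
print(choose(0.5, cost_false_positive=10.0, cost_false_negative=1.0))  # → reject
```

With the error costs reversed (a missed true message being the expensive mistake), the same rule flips toward trusting, which mirrors how perceived danger versus trustworthiness of the politician tilts participants' judgments.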

With this in mind, the authors distinguished two types of error: a type 1 error consists of treating a video with negative content as true (false positive), while a type 2 error involves not trusting a trustworthy source (false negative). The two errors were operationalized through four hypotheses:

Type 1 errors

  • Hypothesis 1: the more negative the content of a video (shown without a description), the more likely a) the video and b) its message will be perceived as fake.
  • Hypothesis 2: compared with positive videos, participants will perceive a negative video shown with a description as fake regarding a) the video and b) its message.

Type 2 errors

  • Hypothesis 3: a positively perceived video target motivates participants to perceive a) the video and b) its message as trustworthy, even when the video is a deepfake.
  • Hypothesis 4: a positively perceived video target motivates participants to perceive a) the video and b) its message as trustworthy, even when the video is a deepfake accompanied by a description.

Results

  • The findings supported hypotheses 1, 2, and 3, but not 4.
  • Participants can distinguish a deepfake from a real video – they were able to identify the fake Trump and Obama messages and videos. However, type 1 and type 2 errors are still made.
  • Deepfake messages from Obama, alongside deepfake videos and messages from Trump, were accurately labeled as fake.
  • Defining deepfakes aids participants in identifying fake videos and messages.
  • Error management theory correctly predicted that participants would choose the least costly option when considering whether a message or video is fake.
  • The perceived danger of a politician is a prominent factor in prompting participants to label a video as a deepfake.

Between the lines

With these points in mind, the most salient takeaway is the following:

  • Messages and videos are considered real when the video subject is perceived as trustworthy.

Despite the potential of deepfake description labels and awareness of deepfakes' dangers, it is essential to know that anyone can produce a deepfake, or have one created about them. The more trustworthy someone is perceived to be, the greater the danger that a deepfake involving them will be seen as authentic.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.


About Us


Founded in 2018, the Montreal AI Ethics Institute (MAIEI) is an international non-profit organization equipping citizens concerned about artificial intelligence and its impact on society to take action.

  • © 2025 Montreal AI Ethics Institute.
  • This work is licensed under a Creative Commons Attribution 4.0 International License.
