Montreal AI Ethics Institute


Deepfakes and Domestic Violence: Perpetrating Intimate Partner Abuse Using Video Technology

June 17, 2023

🔬 Research Summary by Giuliana Luz Grabina, a philosophy undergraduate student at McGill University, with an interest in AI/technology policy regulation from a gendered perspective.

[Original paper by Lucas T. Kweilin]


Overview: The increasing ease of use and availability of deepfake technology signals a worrying trend in technology-facilitated sexual abuse. This article argues that while deepfake technology poses a risk to women in general, victims of domestic abuse are at particular risk because perpetrators now have a new means to threaten, blackmail, and abuse their victims with non-consensual, sexually explicit deepfakes.


Introduction

In 2018, a video featuring Barack Obama making derogatory statements, swearing, and acting out of character circulated widely online. This deepfake Obama video warned viewers of the dangers of deepfake technology and urged them to be “more vigilant with what we trust from the internet.” While most discussions of the harms of deepfake technology focus on its misuse to manipulate elections, spread misinformation, alter public opinion, and threaten national security, considerably less attention has been paid to the harms of non-consensual sexual deepfakes, which constitute the majority of deepfakes shared on the internet. Indeed, a study conducted by Sensity AI found that only 35 videos featured politicians, whereas 96% of deepfakes were non-consensual sexual deepfakes—most of which (99%) were made of women.

In this article, author Lucas T. Kweilin examines the harms that deepfake technology poses to women—particularly those who are victims of domestic violence—and proposes that non-consensual deepfakes and other types of image-based abuse can be best regulated under a uniform federal law rather than through inconsistent and unenforceable state laws.

Key Insights

Beyond Public Figures: Non-Consensual Sexual Deepfakes

Deepfake technology uses artificial intelligence and facial-mapping techniques to merge, combine, replace, and superimpose images and video clips, creating authentic-looking videos known as deepfakes. Some of the earliest non-consensual sexual deepfakes posted online featured various female celebrities, including Taylor Swift, Scarlett Johansson, Gal Gadot, and Kristen Bell.

Since then, the author argues, deepfake applications have extended beyond celebrity and political-figure sexual deepfakes. This technology is often employed to produce manipulated pornographic videos of ordinary women and girls without their consent. Although it is difficult to determine the prevalence rate of non-consensual sexual deepfakes, studies have found that deepfake apps have been used to generate fake nude images of more than 68,000 women. Other studies have found that non-consensual sexual deepfakes have targeted at least 100,000 people, including underage children. More recently, TikTok users have been targeted. As the author emphasizes, “the targeting of TikTok is especially concerning because nearly a third of users are under the age of 14, and some have already found videos of themselves to appear on websites like PornHub.”

False Realities, Real Harms: Deepfakes and Domestic Violence 

The author categorizes the use of deepfake technology to create non-consensual sexual deepfakes as violence against women, arguing that deepfakes provide a relatively new means of perpetrating domestic violence. Domestic violence is violence between people who have, or have had, an intimate relationship.

When a current or former intimate partner attempts to control or dominate a relationship by initiating physical, sexual, or psychological abuse on their victim, such behavior is known as intimate partner violence (IPV). As the author emphasizes, domestic violence goes beyond physical violence. It can involve a pattern of domination or coercive control to undermine the victim’s autonomy, social support, equality, and dignity. 

Image-based sexual abuse describes abuse in which perpetrators fabricate or distribute private sexual images of victims (primarily women) without their consent. According to the author, deepfakes are used against women in much the same way as other forms of image-based sexual abuse: they strip women of their sexual autonomy.

While anyone can become a victim of non-consensual sexual deepfakes, even without a real compromising image, the author argues that victims of domestic abuse are particularly vulnerable. Perpetrators now have a readily available means to control, blackmail, intimidate, harass, and abuse their victims. It is also common for perpetrators to disseminate, or threaten to disseminate, compromising media to the victim’s family, friends, employers, coworkers, and peers.

A Gap in Legislation: Progress and Limitations 

Fortunately, deepfakes have forced lawmakers to pay closer attention to technology-facilitated abuse. California and Virginia were among the first states in the United States to impose criminal penalties on those convicted of distributing non-consensual deepfake images and videos. Other states, including Illinois, Texas, Washington, and California, have also adopted biometric privacy laws that allow people to take civil action against anyone who uses their identifiable images without their consent. 

However, as the author highlights, despite two previous attempts to introduce bills at the federal level, such as the ENOUGH Act from 2017 and the SHIELD Act from 2019, no federal laws protect victims of non-consensual pornography. States are also ill-equipped to handle cases of non-consensual pornography effectively. 

In fact, a recent study found that the states that have enacted statutes that regulate non-consensual pornography use inconsistent language, fail to provide comprehensive protection for victims, and do not hold producers and distributors accountable, thereby allowing them to evade consequences. 

Moreover, deepfakes evade most state revenge-porn laws because the videos do not depict the victim’s actual nudity, which exempts such scenarios from prosecution. Likewise, if deepfakes are created for monetary gain, attention, or clout rather than for revenge, a state’s non-consensual pornography statute may not apply.

Between the lines

This paper highlights the urgent need for lawmakers to adapt and update legislation to keep pace with technological advancements and address the unique challenges that deepfake technology poses to women, particularly victims of domestic abuse. While AI technologies can reveal the gaps in our existing laws, it is up to legislators, policymakers, and government officials not to let those abuses fall through the cracks.

