Montreal AI Ethics Institute

Democratizing AI ethics literacy
Deepfakes and Domestic Violence: Perpetrating Intimate Partner Abuse Using Video Technology

June 17, 2023

🔬 Research Summary by Giuliana Luz Grabina, a philosophy undergraduate student at McGill University, with an interest in AI/technology policy regulation from a gendered perspective.

[Original paper by Kweilin T. Lucas]


Overview: The increasing ease and availability of deepfake technology indicate a worrying trend in technology-facilitated sexual abuse. This article argues that while deepfake technology poses a risk to women in general, victims of domestic abuse are at particular risk because perpetrators now have a new means to threaten, blackmail, and abuse their victims with non-consensual, sexually explicit deepfakes.


Introduction

In 2018, a video featuring Barack Obama making derogatory statements, swearing, and acting out of character circulated widely online. This deepfake Obama video warned of the dangers of deepfake technology and urged viewers to be “more vigilant with what we trust from the internet.” While most discussions of the harms of deepfake technology focus on its misuse to manipulate elections, spread misinformation, alter public opinion, and threaten national security, considerably less attention has been paid to the harms of non-consensual sexual deepfakes, which constitute the majority of deepfakes shared on the internet. Indeed, a study conducted by Sensity AI found that only 35 of the videos featured politicians, whereas 96% were non-consensual sexual deepfakes, 99% of which depicted women.

In this article, author Kweilin T. Lucas examines the harms that deepfake technology poses to women, particularly those who are victims of domestic violence, and proposes that non-consensual deepfakes and other types of image-based abuse are best regulated under a uniform federal law rather than through inconsistent and unenforceable state laws.

Key Insights

Beyond Public Figures: Non-Consensual Sexual Deepfakes

Deepfake technology uses artificial intelligence and facial mapping to merge, combine, replace, and superimpose images and video clips, creating authentic-looking videos known as deepfakes. Some of the earliest non-consensual sexual deepfakes posted online featured female celebrities, including Taylor Swift, Scarlett Johansson, Gal Gadot, and Kristen Bell.
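To make the underlying technique concrete: many early face-swap deepfakes were built on a simple autoencoder design with one shared encoder and a separate decoder per identity. The sketch below is not from the paper; it is a minimal conceptual illustration assuming PyTorch, with hypothetical class names and illustrative layer sizes. The core idea is to encode a frame of person A, then decode it with person B's decoder, rendering B's face with A's pose and expression.

```python
import torch
import torch.nn as nn

class FaceSwapAutoencoder(nn.Module):
    """Conceptual sketch: one shared encoder, one decoder per identity."""

    def __init__(self, latent_dim: int = 256):
        super().__init__()
        # The shared encoder learns identity-agnostic structure (pose, expression).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )
        # Each decoder learns to render one specific person's face.
        self.decoder_a = self._make_decoder(latent_dim)
        self.decoder_b = self._make_decoder(latent_dim)

    @staticmethod
    def _make_decoder(latent_dim: int) -> nn.Sequential:
        return nn.Sequential(
            nn.Linear(latent_dim, 64 * 16 * 16),
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16x16 -> 32x32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32x32 -> 64x64
        )

# The "swap": encode a frame of person A, but render it with person B's decoder.
model = FaceSwapAutoencoder()
frame_of_a = torch.rand(1, 3, 64, 64)  # placeholder 64x64 RGB frame
swapped = model.decoder_b(model.encoder(frame_of_a))
```

That this core mechanism fits in a few dozen lines, and has been packaged into point-and-click apps, is precisely the accessibility concern the article raises.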

Since then, the author argues, deepfake applications have extended beyond sexual deepfakes of celebrities and political figures. The technology is now often employed to produce manipulated pornographic videos of ordinary women and girls without their consent. Although it is difficult to determine the prevalence of non-consensual sexual deepfakes, studies have found that deepfake apps have been used to generate fake nude images of more than 68,000 women. Other studies have found that non-consensual sexual deepfakes have targeted at least 100,000 people, including underage children. More recently, TikTok users have been targeted. As the author emphasizes, “the targeting of TikTok is especially concerning because nearly a third of users are under the age of 14, and some have already found videos of themselves to appear on websites like PornHub.”

False Realities, Real Harms: Deepfakes and Domestic Violence 

The author categorizes the use of deepfake technology to create non-consensual sexual deepfakes as violence against women, arguing that deepfakes provide a relatively new means to perpetrate domestic violence. Domestic violence is violence between people who have, or have had, an intimate relationship.

When a current or former intimate partner attempts to control or dominate a relationship by inflicting physical, sexual, or psychological abuse on their victim, such behavior is known as intimate partner violence (IPV). As the author emphasizes, domestic violence goes beyond physical violence: it can involve a pattern of domination or coercive control that undermines the victim’s autonomy, social support, equality, and dignity.

Image-based sexual abuse refers to abuse in which perpetrators fabricate or distribute private sexual images of victims (primarily women) without their consent. According to the author, deepfakes are used against women in much the same way as other forms of image-based sexual abuse: they strip women of their sexual autonomy.

While anyone can become a victim of non-consensual sexual deepfakes, even without a real compromising image ever existing, the author argues that victims of domestic abuse are particularly vulnerable: perpetrators now have a readily available means to control, blackmail, intimidate, harass, and abuse their victims. It is also common for perpetrators to disseminate, or threaten to disseminate, compromising media to the victim’s family, friends, employers, coworkers, and peers.

A Gap in Legislation: Progress and Limitations 

Fortunately, deepfakes have forced lawmakers to pay closer attention to technology-facilitated abuse. California and Virginia were among the first states in the United States to impose criminal penalties on those convicted of distributing non-consensual deepfake images and videos. Other states, including Illinois, Texas, Washington, and California, have also adopted biometric privacy laws that allow people to take civil action against anyone who uses their identifiable images without their consent. 

However, as the author highlights, despite two previous attempts to pass federal legislation, the ENOUGH Act of 2017 and the SHIELD Act of 2019, no federal law protects victims of non-consensual pornography. States are also ill-equipped to handle cases of non-consensual pornography effectively.

In fact, a recent study found that the states that have enacted statutes regulating non-consensual pornography use inconsistent language, fail to provide comprehensive protection for victims, and do not hold producers and distributors accountable, allowing them to evade consequences.

Moreover, deepfakes evade most state revenge porn laws because the nudity depicted in the videos is not the victim’s own body, which exempts such scenarios from prosecution. Likewise, if deepfakes are created for monetary gain, attention, or clout rather than for revenge, a state’s non-consensual pornography statute may not apply.

Between the lines

This paper highlights the urgent need for lawmakers to update legislation to keep pace with technological advancements and to address the unique challenges that deepfake technology poses to women, particularly victims of domestic abuse. While AI technologies can reveal the gaps in our existing laws, it is up to legislators, policymakers, and government officials to ensure that victims of these abuses do not fall through the cracks.
