Montreal AI Ethics Institute

Democratizing AI ethics literacy

Digital Sex Crime, Online Misogyny, and Digital Feminism in South Korea

January 13, 2025

🔬 Research Summary by Giuliana Luz Grabina, a McGill University philosophy alumna, with an interest in AI/technology policy regulation from a gendered perspective.

[Original paper by Minyoung Moon]


Overview: South Korea has a dark history of digital sex crimes. South Korean police reported a significant rise in online deepfake sex crimes, documenting 297 cases in the first seven months of 2024, a sharp increase from the 180 cases reported throughout 2023 and 160 in 2021. This paper traces how gender-based violence has developed and diversified in the Korean context, aided by evolving digital technologies. It also explores how Korean women, supported by a growing population of digital feminists, have responded to the pervasive problems of digital sex crimes and online misogyny.


Introduction

In August 2024, investigators uncovered multiple Telegram chatrooms, totalling over 220,000 members, that featured non-consensual sexually explicit deepfakes of female university students, as well as high school and middle school students. Users, mainly teenage students, would upload photos of people they knew—both classmates and teachers—and other users would then turn them into sexually explicit deepfake images. 

In an emergency meeting, President Yoon called for a swift crackdown on these Telegram chatrooms. Yet, many remained skeptical of President Yoon’s ability and commitment to adequately address South Korea’s growing deepfake porn epidemic. 

In this paper, Minyoung Moon examines how South Korea’s rampant misogyny fuels gender-based digital sex crimes, earning the country a reputation as the cyber sex crime capital of the world.

Tech-Facilitated Sex Crimes: From Motel Rooms to the Nth Room

Prior to the internet, technology-facilitated sex crimes primarily revolved around the secret filming of sexual encounters in motel rooms and the offline distribution of those videos. Since the advent of the internet, digital sex crimes have increased dramatically: in 2005, they accounted for only 3.6 percent of all sex crimes, but by 2015 that share had risen to 24.9 percent.

In 2020, the “Nth Room incident” sparked national attention. It involved the sexual exploitation of women, including sixteen minors, several dozen of whom were lured through fake job advertisements into the creation of sexually exploitative material. These videos and pictures, Moon reports, were then sold to customers via the Telegram messaging app. Shockingly, approximately 60,000 individuals were reportedly engaged in the production, distribution, or possession of the content.

South Korea’s Misogyny Problem

In a 2018 survey on women’s experiences of online misogyny, a staggering 97 percent of respondents reported having been exposed to misogynistic content online. Moon argues that this alarming statistic underscores how deeply entrenched online misogyny is in Korean society, particularly within the notorious online community Ilbe, which has been widely condemned for perpetuating hatred toward Korean women. Founded in 2010, Ilbe (short for “Ilgan Best,” meaning “daily best” in English) is a platform that aggregates and shares posts deleted or censored from DC Inside, one of Korea’s largest online community websites.

Like Reddit, DC Inside allows anonymous users to exchange information and participate in themed galleries, which function like subreddits. This environment fosters a variety of predominantly male-dominated digital subcultures where memes and internet slang often originate. Ilbe users perpetuate misogynistic discourse by portraying young Korean women as self-centered and as demanding excessive rights, and they attribute many social problems, even those only remotely related to gender, to the perceived selfishness of Korean women. Although Ilbe is widely regarded as an extreme online hate group, Moon argues that its substantial user base, which peaked at roughly 40,000 daily users in 2015, underscores its significant social influence. This antagonistic view of women has become increasingly widespread, particularly among Korean youth: a 2021 survey of this demographic found that women were the primary targets of hate speech.

Online misogyny transcends digital harassment and manifests as gender-based violence in offline settings. The 2016 murder case at Gangnam Station illustrates this connection: a man brutally stabbed a female stranger in a public restroom and admitted he targeted her because he felt ignored by women, underscoring misogyny as his motive. The incident triggered an unprecedented rally of Korean women near Gangnam Station, symbolizing their deep-seated grievances against pervasive online misogyny.

Digital Activism

In response, Korean digital feminists, well-versed in online subcultures and determined to combat online misogyny, established the platform Megalia. Megalia adopted a playful and sarcastic language style that mirrored misogynistic rhetoric, turning men into the objects of ridicule. Although the platform faced criticism for its misandrist language, Moon argues that Megalia played a crucial role in leading campaigns against technology-facilitated gender-based violence in Korea.

A notable achievement was the shutdown of Soranet, Korea’s largest illegal pornography website, in 2016. Soranet had served as a popular platform for sharing and viewing illegally filmed videos of women. Megalia’s public condemnation of Soranet’s criminality led to a collaboration with a feminist politician, which prompted a police investigation and resulted in the arrest of the platform’s administrators. Additionally, Megalia helped reframe the 2016 Gangnam Station murder as a manifestation of widespread misogyny rather than the act of a schizophrenic individual.

Between the lines

This paper addresses the urgent need for a shift in societal attitudes to effectively tackle tech-facilitated sex crimes. Legal reforms are essential, but they must be accompanied by changes in how misogyny is viewed and addressed. The rise in reported cases in South Korea, from 160 in 2021 to 297 in the first seven months of 2024, shows how deep-seated misogyny and gender-based violence underpin this crisis. The issue is also global: Korean feminists are calling for international awareness and action, urging the global community to pay attention to the recent deepfake scandal involving Telegram chatrooms with over 220,000 members. Without addressing the root causes of digital sex crimes, policy changes alone will be insufficient.
