Montreal AI Ethics Institute

Democratizing AI ethics literacy


Digital Sex Crime, Online Misogyny, and Digital Feminism in South Korea

January 13, 2025

🔬 Research Summary by Giuliana Luz Grabina, a McGill University philosophy alumna with an interest in AI and technology policy regulation from a gendered perspective.

[Original paper by Minyoung Moon]


Overview: South Korea has a dark history of digital sex crimes. South Korean police reported a significant rise in online deepfake sex crimes, with 297 cases documented in the first seven months of 2024—a sharp increase from 180 cases reported throughout 2023 and 160 in 2021. This paper traces the development and diversification of gender-based violence, aided by evolving digital technologies, in the Korean context. It also explores how Korean women have responded to pervasive digital sex crimes and online misogyny, with the support of a growing population of digital feminists.


Introduction

In August 2024, investigators uncovered multiple Telegram chatrooms, totalling over 220,000 members, that featured non-consensual sexually explicit deepfakes of female university students, as well as high school and middle school students. Users, mainly teenage students, would upload photos of people they knew—both classmates and teachers—and other users would then turn them into sexually explicit deepfake images. 

In an emergency meeting, President Yoon called for a swift crackdown on these Telegram chatrooms. Yet, many remained skeptical of President Yoon’s ability and commitment to adequately address South Korea’s growing deepfake porn epidemic. 

In this article, Minyoung Moon examines how South Korea’s rampant misogyny fuels gender-based digital sex crimes and has earned the country a reputation as the cyber sex crime capital.

Tech-Facilitated Sex Crimes: From Motel Rooms to the Nth Room

Before the internet, technology-facilitated sex crimes primarily involved the covert filming of sexual encounters in motel rooms and the offline distribution of the resulting videos. Since the advent of the internet, digital sex crimes have increased dramatically: in 2005, they accounted for only 3.6 percent of all sex crimes, but by 2015, that share had risen to 24.9 percent.

In 2020, the “Nth Room incident” sparked national attention. Several dozen women—including sixteen minors—were lured through fake job advertisements into the creation of sexually exploitative material. These videos and pictures, Moon argues, were then sold to customers via the Telegram messaging app. Shockingly, an estimated 60,000 individuals were engaged in the production, distribution, or possession of the content.

South Korea’s Misogyny Problem

In a 2018 survey on women’s experiences of online misogyny, a staggering 97 percent of respondents reported being exposed to misogynistic content online. Moon argues that this alarming statistic underscores the deeply entrenched nature of online misogyny in Korean society, particularly within the notorious online community Ilbe, which has been widely condemned for perpetuating hatred toward Korean women. Founded in 2010, Ilbe—short for “Ilgan Best,” meaning “daily best” in English—is a platform that aggregates and shares posts deleted or censored from DC Inside, one of Korea’s largest online community websites.

Like Reddit, DC Inside allows anonymous users to exchange information and participate in themed galleries, which function like subreddits. This environment fosters a variety of digital subcultures, predominantly male-dominated, where memes and internet slang often originate. Ilbe users perpetuate misogynistic discourses by portraying young Korean women as self-centered and demanding excessive rights. Ilbe members also attribute many social problems, even those only remotely related to gender, to the perceived selfishness of Korean women. Although Ilbe is widely regarded as an extreme online hate group, Moon argues that its substantial user base—peaking at up to 40,000 daily users in 2015—underscores its significant social influence. This antagonistic view toward women has become increasingly widespread, particularly among Korean youth: a 2021 survey of this demographic revealed that women were the primary targets of hate speech.

Online misogyny transcends digital harassment and manifests as gender-based violence in offline settings. The 2016 murder at Gangnam Station illustrates this connection: a man brutally stabbed a female stranger in a public restroom, admitting he targeted her because he felt ignored by women, underscoring misogyny as his motive. This incident triggered an unprecedented rally of Korean women near Gangnam Station, symbolizing their deep-seated grievances against pervasive online misogyny.

Digital Activism

In response, Korean feminists established Megalia, a platform created by digital feminists well-versed in online subcultures and determined to combat online misogyny. Megalia adopted a playful and sarcastic language style that mirrored misogynistic rhetoric, specifically targeting men as the object of ridicule. Despite facing criticism for its misandrist language, Moon argues that Megalia played a crucial role in leading campaigns against technology-facilitated gender-based violence in Korea.

A notable achievement was the shutdown of Soranet, Korea’s largest illegal pornography website, in 2016. Soranet served as a popular platform for sharing and viewing illegally filmed videos of women. Megalia condemned Soranet’s criminality, which led to a collaboration with a feminist politician; this collaboration initiated a police investigation and resulted in the arrest of the platform’s administrators. Additionally, Megalia helped reframe the 2016 Gangnam Station murder as a manifestation of widespread misogyny rather than a crime committed by a schizophrenic individual.

Between the lines

This paper addresses the urgent need for a shift in societal attitudes to effectively tackle tech-facilitated sex crimes. Legal reforms are essential but must be accompanied by changes in how we view and address misogyny. The rise in cases in South Korea—from 160 in 2021 to 297 in the first seven months of 2024—shows how deep-seated misogyny and gender-based violence underpin this crisis. The issue is global, and Korean feminists are calling for international awareness and action in light of the recent deepfake scandal involving Telegram chatrooms with over 220,000 members. Without addressing the root causes of digital sex crimes, policy changes alone will be insufficient.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.


© 2025 Montreal AI Ethics Institute. This work is licensed under a Creative Commons Attribution 4.0 International License.