Montreal AI Ethics Institute

Democratizing AI ethics literacy


Digital Sex Crime, Online Misogyny, and Digital Feminism in South Korea

January 13, 2025

🔬 Research Summary by Giuliana Luz Grabina, a McGill University philosophy alumna interested in AI and technology policy and regulation from a gendered perspective.

[Original paper by Minyoung Moon]


Overview: South Korea has a dark history of digital sex crimes. South Korean police reported a significant rise in online deepfake sex crimes, with 297 cases documented in the first seven months of 2024, a sharp increase from the 180 cases reported throughout 2023 and 160 in 2021. This paper traces the development and diversification of gender-based violence, aided by evolving digital technologies, in the Korean context. It also explores how Korean women, supported by a growing population of digital feminists, have responded to the pervasive problems of digital sex crimes and online misogyny.


Introduction

In August 2024, investigators uncovered multiple Telegram chatrooms, totalling over 220,000 members, that featured non-consensual sexually explicit deepfakes of female university students, as well as high school and middle school students. Users, mainly teenage students, would upload photos of people they knew—both classmates and teachers—and other users would then turn them into sexually explicit deepfake images. 

In an emergency meeting, President Yoon called for a swift crackdown on these Telegram chatrooms. Yet, many remained skeptical of President Yoon’s ability and commitment to adequately address South Korea’s growing deepfake porn epidemic. 

In this article, Minyoung Moon examines how South Korea’s rampant misogyny fuels gender-based digital sex crimes, earning the country its reputation as a cyber sex crime capital.

Tech-Facilitated Sex Crimes: From Motel Rooms to the Nth Room

Prior to the internet, technology-facilitated sex crimes primarily involved the secret filming of sexual encounters in motel rooms and the offline distribution of the resulting videos. Since the advent of the internet, such crimes have increased dramatically: in 2005, digital sex crimes accounted for only 3.6 percent of all sex crimes, but by 2015 that share had risen to 24.9 percent.

In 2020, the “Nth Room incident” sparked national outrage. It involved the sexual exploitation of women—including sixteen minors—several dozen of whom were lured through fake job advertisements into the creation of sexually exploitative material. These videos and pictures, Moon explains, were then sold to customers via the Telegram messenger app. Shockingly, approximately 60,000 individuals were reportedly engaged in the production, distribution, or possession of the content.

South Korea’s Misogyny Problem

In a 2018 survey on women’s experiences of online misogyny, a staggering 97 percent of respondents reported being exposed to misogynistic content online. Moon argues that this alarming statistic underscores the deeply entrenched nature of online misogyny in Korean society, particularly within the notorious online community Ilbe, which has been widely condemned for perpetuating hatred toward Korean women. Founded in 2010, Ilbe—short for “Ilgan Best,” meaning “daily best” in English—is a platform that aggregates and shares posts deleted or censored from DC Inside, one of Korea’s largest online community websites.

Like Reddit, DC Inside allows anonymous users to exchange information and participate in themed galleries, which function like subreddits. This environment fosters a variety of digital subcultures, predominantly male-dominated, where memes and internet slang often originate. Ilbe users perpetuate misogynistic discourses by portraying young Korean women as self-centered and demanding excessive rights. Ilbe members also attribute many social problems, even those remotely related to gender, to the perceived selfishness of Korean women. Despite being widely regarded as an extreme online hate group, Moon argues that Ilbe’s substantial user base—peaking at up to 40,000 daily users in 2015—underscores its significant social influence. This antagonistic view toward women has become increasingly widespread, particularly among Korean youth. A 2021 survey focusing on this demographic revealed that women were the primary targets of hate speech.

Online misogyny transcends digital harassment, manifesting as gender-based violence in offline settings. The 2016 murder at Gangnam Station illustrates this connection: a man brutally stabbed a female stranger in a public restroom, admitting he targeted her because he felt ignored by women, underscoring misogyny as his motive. This incident triggered an unprecedented rally of Korean women near Gangnam Station, symbolizing their deep-seated grievances against pervasive misogyny.

Digital Activism

In response, Korean feminists established Megalia, a platform created by digital feminists well-versed in online subcultures and determined to combat online misogyny. Megalia adopted a playful and sarcastic language style that mirrored misogynistic rhetoric, specifically targeting men as the object of ridicule. Despite facing criticism for its misandrist language, Moon argues that Megalia played a crucial role in leading campaigns against technology-facilitated gender-based violence in Korea.

A notable achievement was the 2016 shutdown of Soranet, Korea’s largest illegal pornographic website and a popular platform for sharing and viewing illegally filmed videos of women. Megalia’s public condemnation of Soranet’s criminality led to a collaboration with a feminist politician, which initiated a police investigation and resulted in the arrest of the platform’s administrators. Additionally, Megalia helped reframe the 2016 Gangnam Station murder as a manifestation of widespread misogyny rather than the act of a lone mentally ill individual.

Between the lines

This paper addresses the urgent need for a shift in societal attitudes to effectively tackle tech-facilitated sex crimes. Legal reforms are essential but must be accompanied by changes in how misogyny is viewed and addressed. The rise in cases in South Korea—from 160 in 2021 to 297 in the first seven months of 2024—shows how deep-seated misogyny and gender-based violence underpin this crisis. The issue is global, and Korean feminists are calling for international awareness and action, urging the global community to take note of the recent deepfake scandal involving Telegram chatrooms with over 220,000 members. Without addressing the root causes of digital sex crimes, policy changes alone will be insufficient.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.


  • © 2025 MONTREAL AI ETHICS INSTITUTE.
  • This work is licensed under a Creative Commons Attribution 4.0 International License.