Montreal AI Ethics Institute
Democratizing AI ethics literacy


Recess: Gender-Based Violence on Grok is a Feature, Not a Failure

May 11, 2026

A collage of a female office worker seated at a desk surrounded by stacks of paperwork while multiple firehoses spray streams of fiery liquid around her.

✍️ By Natalie Jenkins from Encode Canada.

Natalie is an MSc candidate in Digital Policy at University College Dublin, based in Toronto, Ontario. She is interested in the impacts of emerging technologies on human rights and democracy. Natalie is a writer at Encode Canada and a journalist at Estonian Life newspaper in Toronto.


📌 Editor’s Note: This piece is part of our Recess series, featuring university students from Encode’s Canadian chapter at McGill University. The series aims to share insights from university students on current issues in AI ethics. In this article, Natalie Jenkins uses X’s Grok to examine the topic of “misogyny by design” and how AI tools can scale gender-based abuse through deliberate design choices.

Photo credit: Pauline Wee & DAIR / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/


In the early hours of a particularly cold January morning, Hannah awoke to her phone buzzing with X notifications. Still half-asleep, she opened the app and realized the attention wasn’t coming from one of her posts. A stranger had prompted Grok (X’s generative AI chatbot) to undress an old photo of her, which now had tens of thousands of impressions. With no ability to take it down, Hannah felt not only mortified but also afraid of what it might mean for her safety and how her peers, family, or employer would see her. 

This was the reality shared by millions of users in late December 2025 and early January 2026, when Grok came under fire for facilitating the rapid production and circulation of non-consensual sexualized deepfake images of women and children. Law professor Clare McGlynn describes abuse cases like this as a consequence of “misogyny-by-design,” where safeguards and accountability mechanisms are applied only after public outrage, rather than built in from the start. This article uses the Grok incident as an example of the broader pattern of misogyny by design. AI systems are not neutral; they reflect their makers’ biases, enabling gender-based abuse to proliferate at unprecedented speed and scale.

Timeline of abuse on Grok

In March 2025, X added an image-editing feature to Grok, allowing users to edit images through text-based prompts. In December 2025, the company embedded an “edit image” button on all public posts. This allowed users to generate an altered version of someone else’s image and post it as a reply, without consent from the original poster and without an opt-out option. Soon after, the platform saw a surge in users prompting Grok to undress women and girls, depict them in sexual positions, and portray them as victims of sexual violence. “Between Dec. 31 and Jan. 8, Grok generated more than 4.4 million images, of which at least 41 percent were sexualized images of women.”

On January 9, Grok’s image generation was restricted to paid subscribers. However, this response was criticized since the standalone Grok app, the Grok tab on X, and web browsers still allowed image generation. On January 15, xAI announced that it would geoblock the creation of such content in jurisdictions where it is illegal, in addition to implementing limitations on generating images of real people. However, these guardrails “are easy to circumvent by simply using ‘less overt’ prompts,” or by using a VPN.

Misogyny by design 

Responsibility for this harm should not rest solely with the users prompting Grok, but with X’s design choices that made the production and dissemination of these images easy and widely accessible. McGlynn describes this pattern as “misogyny by design” to demonstrate how deliberate design and regulatory choices enable gendered harm: “Platforms like X could have prevented this if they had chosen to, but they have made a deliberate choice not to,” she said in response to a deepfake porn video of Taylor Swift generated in 2025. Whereas other generative AI systems, including ChatGPT and Gemini, do not allow users to generate images resembling real people (albeit possible with some loopholes), “Elon Musk has positioned Grok as a more permissive chatbot with fewer rules governing its prompt generation than competing AI models.” The absence of these types of restrictions on X signals a clear disregard for women’s online safety and dignity.

Image-based abuse predates AI and sits within a broader continuum of violence against women. What is new are the tools that enable it. Generative AI repackages existing forms of abuse and amplifies them at unprecedented scale and speed, particularly when embedded in social media platforms with hundreds of millions of users. Grok’s impact in this case stemmed not only from its image-generation capability but from its integration into X, which has 561 million active users globally.

Another layer of harm is that these images are circulated on a platform with economic incentives. Premium accounts with more than 2,000 active followers and five million impressions over three months are eligible for revenue sharing. Policies that incentivize the production of non-consensual sexualized images should not be seen in a vacuum. Rather, they are a consequence of the broader Western cultural and economic system under which women’s bodies have long been commodified and objectified. Seen in this light, it’s clear that the incident wasn’t a one-off failure; it was a predictable outcome of disseminating powerful and accessible AI systems without the necessary protections against foreseeable gendered abuse.

Real-world harms 

The impact of this abuse is severe. While anyone can be targeted, women and girls are disproportionately affected. Women in public-facing roles, including journalists, politicians, and activists, face particular risk, especially in the Global South and in religious and conservative countries. 

Immediate consequences include fear, anxiety, and humiliation, with victims frequently reporting reputational damage, job loss, and isolation from their communities. The harm persists because removal is difficult: copies proliferate across different platforms. To avoid further abuse, many victims self-censor, leaving platforms and withdrawing from public life. Suggestions that users should simply “log off” ignore the reality that essential services, work, and civic participation are now deeply intertwined with digital platforms. Non-consensual deepfakes therefore function as a tool of intimidation that pushes women out of public spaces.

The governance gap 

The incident triggered global condemnation, with investigations launched in the EU and the UK. If X is found to have violated obligations under the EU’s Digital Services Act or the UK’s Online Safety Act, the platform could face significant fines or blocking measures. Other states have already taken such action: Malaysia and Indonesia have temporarily suspended X until it implements adequate safeguards. Canada does not plan on blocking X but has been investigating the platform since February 2025 for compliance with Canada’s federal privacy law. 

Canada’s regulatory landscape 

Canada’s current regulatory landscape is not equipped to deal with the scale and nature of this abuse. National-level regulations that would have imposed platform accountability, including the Online Harms Act and the Artificial Intelligence and Data Act (AIDA), were shelved. 

Existing laws make it illegal to knowingly distribute an intimate image without consent. Proposed amendments in Bill C-16 would explicitly extend this prohibition to sexually explicit deepfakes. However, as of early 2026, the bill has not yet been passed into law.

This means that victims must primarily rely on civil remedies, but these vary by province. All provinces except Ontario have some form of intimate image abuse protection legislation. Still, these laws place the burden of seeking redress, support, and accountability on victims. This can be financially costly and emotionally taxing, as victims are forced to relive their experiences when recounting them to law enforcement.

Recommendations 

Support systems are important for victims of this abuse. Canada should establish rapid-response takedown pathways, accessible legal aid mechanisms, and dedicated helpline-style services for victims of image-based abuse. For example, the Canadian Centre for Child Protection, a charity, operates the Cybertip.ca tip line specifically for reporting the online sexual exploitation of children. More important still are solutions that address the structural factors behind this harm. Governments must require generative AI systems like Grok, and the platforms that deploy them, to treat safety as a guiding principle in the design of their products. Such measures may include mandatory gender impact assessments (GIAs), safety audits conducted by independent researchers, and participatory design that involves gender-based violence experts and victims’ perspectives prior to deployment. Until then, abuse will persist on platforms like Grok not as a glitch or a failure, but as one of their most profitable features.





About Us


Founded in 2018, the Montreal AI Ethics Institute (MAIEI) is an international non-profit organization equipping citizens concerned about artificial intelligence and its impact on society to take action.

© 2025 Montreal AI Ethics Institute. This work is licensed under a Creative Commons Attribution 4.0 International License.