
🔬 Original article by Alessandra Destison from Encode Canada.
📌 Editor’s Note: This is part of our Recess series, featuring university students from across Canada exploring ethical challenges in AI. Written as part of Encode Canada’s Policy Fellowship, an annual policy-oriented advocacy program where five students across Canada tackle specific critical issues in AI governance. These pieces aim not only to spark discussions on AI literacy and ethics but also to propose tangible and concrete policy recommendations.
I. Executive Summary
The rise of nonconsensual pornographic deepfakes poses a significant threat to the privacy and dignity of Canadians. Deepfake technology enables the creation of realistic, synthetic sexual images or videos without the depicted individuals’ consent, disproportionately targeting women.
Despite the severity of the issue, Canada’s legal framework lacks clear provisions addressing nonconsensual synthetic intimate content. The false nature of deepfake images complicates legal proceedings, leaving victims uncertain about their rights. A comprehensive legislative response is crucial to closing legal loopholes, protecting victims, and holding both individuals and platforms accountable.
To safeguard Canadians’ rights, policymakers must consider expanding the definition in Section 162.1(2), reforming civil remedies, and ensuring platform accountability.
II. Background
- Deepfake pornography involves creating realistic yet false intimate images of individuals. Nonconsensual pornographic deepfakes constitute a form of sexual exploitation in which perpetrators generate synthetic images or videos of victims appearing nude or engaging in sexual acts without their consent. This content is often produced using face-swapping technology, which “swaps” a person’s face onto an existing or synthetic adult video or image (Hao, 2021).
- Today, nonconsensual intimate images can be generated using nudifying websites or readily available apps, which can transform a single fully clothed picture found publicly online into realistic pornographic content. The scale of this problem is alarming:
- A recent study found that 96% of all accessible deepfake content is pornographic, and 99% of it targets women (Kan, 2023). The vast majority of these deepfakes were created without the depicted person’s consent (Dunn, 2024).
- The technology is growing in popularity. In 2023, a study by Graphika identified over 24 million unique visitors to 34 popular nonconsensual pornographic deepfake provider websites (Lakatos, 2023).
- This threat is a growing concern: in 2023, 1 in 10 adults reported either being a victim of deepfake porn or knowing someone who was (ESET, 2024).
- Easily accessible websites such as Deepnude market their service as an opportunity to create nude images of anyone. They state: “Undress anyone. It doesn’t matter who exactly you want to undress anyone online, our artificial intelligence website will bring your fantasy to life in a matter of minutes” (Deepnude, 2025). Although the FAQ suggests “avoiding uploading a person’s image without their consent”, this guidance is not readily accessible, and its wording carries no authoritative force.
- The platform also offers a referral program, which encourages the propagation and popularisation of the technology in exchange for compensation in the form of platform credit toward paid services.
- In 2024, a 16-year-old girl living in Toronto was sent a topless image of herself online. The image was a deepfake, generated from a picture of her taken when she was 13 years old (Mauro & Sheldon, 2024).
- Because nudifying technology requires only a single image of a person, anyone with an online presence is at risk of having their image nonconsensually uploaded to nudifying websites. Younger generations are disproportionately affected due to their extensive use of social media: whether their accounts are private or public, the mere presence of their photos online puts them at risk.
- As the threat of nonconsensual pornographic deepfakes grows more alarming, victims of the creation and publication of such content increasingly need a clear remedy and recourse to protect their right to dignity.
III. Current Policy
The only clear criminal recourse for victims is Section 162.1 of the Criminal Code (publication of an intimate image without consent) (Dunn, 2024). However, under Section 162.1(2), “intimate images” are defined as “recordings of a person” (RSC 1985, c C-46, s 162.1). This wording suggests that the synthetic nature of deepfakes is not explicitly covered by the law, as the provision appears to apply only to “authentic images” (Diab, 2025). Consequently, under a strict reading, deepfake content may not be included (Diab, 2025).
Alternatively, victims may seek recourse through civil law by pursuing claims such as defamation, violation of privacy, or intentional infliction of mental suffering (Dunn, 2024). However, the false nature of deepfakes creates a legal loophole, even in civil cases, that can be exploited by the defence. For instance, in a privacy violation claim, one could argue that no actual privacy breach occurred because the image is fabricated and does not intrude on the victim’s real private life.
At the provincial level, eight provinces have enacted legislation targeting the unauthorized distribution of intimate images, with varying degrees of success in adopting language that captures the risk posed by deepfake technology. These provinces are Nova Scotia, Manitoba, Prince Edward Island, New Brunswick, Newfoundland and Labrador, Alberta, Saskatchewan, and British Columbia.
IV. Legal Considerations: The Right to Freedom of Speech
While regulating the creation or publication of non-consensual pornographic deepfakes may raise concerns under the right to freedom of expression protected by Section 2(b) of the Canadian Charter, such policy can be defended if it is justifiable under Section 1. To succeed, the policy must specifically target the clear and grave harms non-consensual deepfake pornography poses to Canadian society and avoid capturing other types of expression, such as satire, lest it be found overbroad.
Canadian courts have recognized that freedom of expression is not absolute, especially where expression inflicts measurable harm on the community’s most vulnerable groups (such as youth and women). Grounding any restriction in demonstrable harms and applying the least rights-infringing means strengthens both the legal defensibility and the public legitimacy of the policy. A specific, targeted approach to this problem is therefore key.
V. Case Study of Policy Solutions
Case study 1: British Columbia – Intimate Images Protection Act
Background: The Intimate Images Protection Act creates an expedited process for victims of the non-consensual distribution of intimate images. Under the Act, an intimate image includes a visual simultaneous representation of an individual (such as a live stream), whether or not the image has been altered in any way.
With these two specifications, the Act’s definitional framework ensures that deepfake and AI-modified images are covered. The Act also provides a mechanism to easily request the removal of such images from big tech actors such as Google. Furthermore, claimants under the Act are eligible to receive punitive, compensatory, and aggravated damages.
The Act also establishes the Intimate Images Protection Service, which provides support services to victims.
Why it works:
- The language used ensures the inclusion of synthetic images under the legislation.
- The Act provides an expedited, simplified process, which ensures accountability and allows victims of these crimes to seek civil remedies.
- The Intimate Images Protection Service ensures that help is easily accessible and that resources for victims are equitable. These resources are made even more accessible by allowing complaints from those aged 14 and over without parental notification, removing a hurdle that young people face in seeking help.
- The Act also facilitates the removal of these images from the internet, which is often difficult and tedious.
Challenges:
- The Intimate Images Protection Act offers remedies after the harm has occurred, such as take-down orders, penalties, and monetary compensation. However, it lacks a proactive legal mechanism to prevent the creation or distribution of non-consensual deepfake pornography before it reaches the public. Although images can be taken down and civil remedies awarded, significant and irreversible damage may already have been done to the victim.
- The Act offers little upstream deterrence and few mechanisms to stop deepfake content before it spreads. It should therefore be paired with legislation aimed at deterring the creation of non-consensual deepfake pornography in the first place.
Case Study 2: United States – TAKE IT DOWN Act
Background: The TAKE IT DOWN Act introduces criminal prohibitions on the knowing publication of non-consensual intimate images. Importantly, it explicitly includes “digital forgeries”, defined as intimate visual depictions of an identifiable individual created or altered using AI or other technological means (Killion, 2025).
This definition directly includes deepfake and other synthetic images altered with AI, closing the legal loophole of image authenticity. Furthermore, the Act requires covered platforms to create notice-and-removal systems by May 2026, through which victims can request that content and all of its copies be taken down.
Why it works:
- The legislation explicitly includes deepfakes and other AI-generated synthetic images.
- Take-down mechanisms allow copies and reposts of content to be removed from the internet simply and promptly, reducing trauma and hurdles for the victim.
- Platforms’ takedown obligation is enforced not by civil suit but by the FTC: failure to comply with the prescribed removal process or timeline is treated as an “unfair or deceptive act or practice” under the FTC Act. Enforcement therefore does not depend on a victim’s access to legal recourse, which is often out of reach.
- The act may also be used to bolster certain preexisting civil claims and statutes, such as claims made under the Violence Against Women Act (VAWA) or 15 U.S.C. § 6851 (Rosen, 2025).
Challenges:
- Issues with false or bad-actor reports: there is no requirement to certify a takedown request under penalty of perjury, nor any legal consequence for impersonating someone or falsely claiming to act on their behalf (Goldman, 2025). This dangerously lowers the bar for bad actors to falsely report content, or to impersonate others, in order to have content they disagree with removed. Since the burden of investigating the veracity of a claim falls on the platform (which has only 48 hours to verify the claim and remove the content before facing penalties), there is a significant likelihood that the take-down mechanism will be abused (Goldman, 2025). This raises serious free speech and censorship concerns.
- The Act grants immunity to platforms that remove content in good faith, meaning platforms face little to no consequence for removing content in response to false requests. Where the authenticity of a request is difficult to determine, the incentive therefore tilts toward removal rather than retention (Goldman, 2025); the sketch below makes this incentive concrete.
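To make this incentive structure concrete, here is a minimal, purely illustrative Python sketch of a platform’s removal decision under the constraints described above. Every number and name is a hypothetical assumption, not a figure from the statute; the point is only that a steep penalty for non-removal, combined with near-total immunity for wrongful removal, makes honouring even dubious requests the rational choice.

```python
from dataclasses import dataclass

@dataclass
class TakedownRequest:
    content_id: str
    p_genuine: float  # platform's estimate that the request is legitimate (0..1)

# Cost asymmetry assumed from the Act's structure, as described above:
# keeping content on a genuine request risks FTC enforcement, while
# removing content on a false request costs almost nothing because
# good-faith removals are immune from liability. Numbers are illustrative.
COST_WRONGLY_KEEPING = 100.0   # arbitrary units of enforcement exposure
COST_WRONGLY_REMOVING = 1.0    # near zero due to good-faith immunity

def decide(req: TakedownRequest) -> str:
    """Choose whichever action has the lower expected cost to the platform."""
    expected_cost_if_kept = req.p_genuine * COST_WRONGLY_KEEPING
    expected_cost_if_removed = (1 - req.p_genuine) * COST_WRONGLY_REMOVING
    return "remove" if expected_cost_if_kept >= expected_cost_if_removed else "keep"

# A request the platform believes is probably false (5% chance genuine)
# is still rational to honour: 0.05 * 100 = 5.0 > 0.95 * 1.0 = 0.95.
print(decide(TakedownRequest("vid-123", p_genuine=0.05)))   # -> remove
# Removal stays rational until p_genuine falls below roughly 1/101.
print(decide(TakedownRequest("vid-456", p_genuine=0.005)))  # -> keep
```

Under these assumed costs, removal remains the cheaper option until the platform is more than about 99% certain a request is false, which is precisely the over-removal dynamic Goldman describes.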
VI. Potential Policy Solutions
Civil Law Amendment:
Establish clear civil causes of action for victims of nonconsensual deepfake pornography, allowing them to seek damages and injunctions. Civil remedies benefit victims because they carry a lower burden of proof (a balance of probabilities, rather than proof beyond a reasonable doubt as in criminal cases) and provide an avenue for seeking compensatory damages. A comprehensive amendment should:
- Modify existing statutes on privacy violations, defamation, and nonconsensual intimate images to explicitly include synthetic media that falsely depict individuals in sexual or nude contexts.
- Enable individuals to sue perpetrators for emotional distress, reputational harm, and financial damages caused by the dissemination of nonconsensual deepfake pornography. Punitive and compensatory damages should be substantial enough to outweigh any profit made from deepfake creation, deterring future violations.
- Allow courts to issue orders mandating online platforms and content hosts to swiftly remove nonconsensual deepfake pornography and prevent further distribution.
- Establish liability for those who create, distribute, or fail to remove deepfake content upon notice, ensuring victims can take action against both individuals and platforms that profit from such content.
Challenges: Civil law remedies remain inadequate for protecting victims’ rights due to several obstacles:
- The burden of proof rests on the victim, who often lacks the resources to identify and track perpetrators.
- Legal proceedings can be costly, potentially discouraging victims from pursuing justice.
Criminal Law Amendment:
Expand Section 162.1 to explicitly criminalize the creation and distribution of nonconsensual deepfake pornography. Criminal law provides a stronger deterrent effect than civil law, offers faster and more accessible justice, and removes ambiguity in current laws. A criminal amendment should:
- Broaden the definition of intimate images under Section 162.1(2). This could follow British Columbia’s recent provincial legislation, which defines an intimate image as a “visual… representation of an individual, whether or not the individual is identifiable and whether or not the image has been altered in any way” (Intimate Images Protection Act, SBC 2023, c 11, s 1).
Challenges: This approach risks overreach, enforcement difficulties, and potential conflicts with freedom of expression rights.
Self-Regulatory Organization (SRO):
Establish a self-regulatory organization to set standards and enforce compliance among online platforms. For instance, platforms could be required to detect and remove nonconsensual deepfake content promptly, with penalties for noncompliance.
Challenges: This approach relies on the cooperation of private companies, which may prioritize profit over victim protection. Additionally, SROs typically lack the authority to impose significant legal consequences.
VII. Recommendations
To effectively combat the threat of nonconsensual pornographic deepfakes, a multi-faceted approach combining legislative reform and regulatory oversight is essential. The following recommendations provide a balanced strategy:
- Criminal Law Reform
- Amend Section 162.1(2) of the Criminal Code to explicitly include synthetic media under the definition of “intimate images.”
- Criminalize the use of nudifying technology on images of non-consenting parties.
- Mandate law enforcement agencies to develop specialized techniques for investigating and prosecuting deepfake-related offences.
- Civil Law Enhancement
- Amend privacy and defamation statutes to explicitly recognize deepfake content as a violation of individual rights.
- Establish a streamlined process for victims to obtain court orders requiring the removal of nonconsensual deepfake content.
- Impose punitive damages exceeding the profits generated by deepfake creators and distributors to deter future violations.
- Online Platform Accountability
- Introduce legal liability for platforms that fail to act swiftly upon notice of deepfake violations, while ensuring that platforms investigate responsibly whether a reported violation is authentic. Platforms should be given adequate time (more than 48 hours) to investigate claims, to avoid the immediate removal of maliciously reported content (content that is not nonconsensual pornographic deepfakes but is reported as such to suppress material that does not align with personal values, such as LGBTQIA+ content).
- Platforms should not be granted blanket immunity for good-faith removals; instead, they should be required to demonstrate that any removal followed a robust investigation of the request’s veracity.
- All requests should carry an appropriate level of accountability, creating an additional hurdle against false, bad-faith requests. For example, requesters should face liability for impersonating another person or for requesting the removal of content that does not actually depict them (a hypothetical intake schema reflecting these safeguards is sketched after these recommendations).
- Creation of a Self-Regulatory Organization (SRO)
- Establish a regulatory body to set industry standards and monitor compliance with anti-deepfake measures.
- Ensure that platforms remain accountable for investigating and removing content responsibly and within a reasonable time, with a specific focus on verifying that reports are authentic rather than malicious and on handling every report with the victim’s dignity in mind.
- Secure backstops against the malicious exploitation of these regulations by groups seeking the unconstitutional censorship of legal content online (for example, consensual LGBTQIA+ content).
- Implement financial penalties and restrictions on companies that fail to adhere to ethical and legal guidelines.
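To illustrate the accountability measures recommended above, the following is a hypothetical Python sketch of what a takedown-request intake with built-in safeguards might look like. The field names, the 96-hour investigation window, and the gating logic are all assumptions made for illustration; none of them come from an existing statute or platform API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

# Assumed investigation window, deliberately longer than the TAKE IT DOWN
# Act's 48 hours, per the recommendation above. Illustrative value only.
INVESTIGATION_WINDOW = timedelta(hours=96)

@dataclass
class AccountableTakedownRequest:
    content_url: str
    requester_name: str
    identity_evidence: str   # e.g. a reference to verified ID held on file
    depicts_requester: bool  # requester attests the content depicts them
    sworn_attestation: bool  # sworn under penalty, deterring impersonation
    filed_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def investigation_deadline(self) -> datetime:
        """Latest time by which the platform must complete its review."""
        return self.filed_at + INVESTIGATION_WINDOW

def accept_for_review(req: AccountableTakedownRequest) -> bool:
    """Gate a request before any removal happens: no attestation, no review.

    A false sworn attestation would expose the requester to liability,
    supplying the 'additional hurdle' to bad-faith requests recommended above.
    """
    return (req.sworn_attestation
            and req.depicts_requester
            and bool(req.identity_evidence))

# A complete, attested request enters review; an anonymous, unattested
# request is rejected before any content is touched.
ok = AccountableTakedownRequest(
    "https://example.com/clip", "Jane Doe", "gov-id-ref-8841",
    depicts_requester=True, sworn_attestation=True)
anon = AccountableTakedownRequest(
    "https://example.com/clip", "unknown", "",
    depicts_requester=True, sworn_attestation=False)
print(accept_for_review(ok))    # True
print(accept_for_review(anon))  # False
```

The design intent is that accountability attaches before any content is touched: an unattested or anonymous request never reaches the removal stage, while the extended window gives the platform time to verify that the content genuinely depicts the requester.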
References
- Criminal Code, RSC 1985, c C-46, s 162.1
- Deepnude. (2025). https://ai-deep-nude.com/?referal=sJrlqBYqu85D
- Diab, R. (2025, January 11). Are sexual deepfakes not a crime in Canada? https://www.robertdiab.ca/posts/deepfakes/
- Dunn, S. (2024). Legal definitions of intimate images in the age of sexual deepfakes and generative AI. McGill Law Journal, 69(4), 395–416. https://doi.org/10.26443/law.v69i4.1626
- ESET. (2024, March 20). Nearly two-thirds of women worry about being a victim of deepfake pornography, ESET UK research reveals. https://www.eset.com/uk/about/newsroom/press-releases/nearly-two-thirds-of-women-worry-about-being-a-victim-of-deepfake-pornography-eset-uk-research-reveals/
- Goldman, E. (2025, June 5). A takedown of the Take It Down Act. Technology & Marketing Law Blog. https://blog.ericgoldman.org/archives/2025/06/a-takedown-of-the-take-it-down-act.htm
- Hao, K. (2021, September 13). A horrifying new AI app swaps women into porn videos with a click. MIT Technology Review. https://www.technologyreview.com/2021/09/13/1035449/ai-deepfake-app-face-swaps-women-into-porn/
- Intimate Images Protection Act, SBC 2023, c 11, s 1
- Kan, M. (2023, September 13). The internet is full of deepfakes, and most of them are porn. PCMag.
- Killion, V. L. (2025, May 20). The TAKE IT DOWN Act: A federal law prohibiting the nonconsensual publication of intimate images (CRS Report No. LSB11314). https://www.congress.gov/crs-product/LSB11314
- Lakatos, S. (2023). A revealing picture. Graphika. https://22006778.fs1.hubspotusercontent-na1.net/hubfs/22006778/graphika-report-a-revealing-picture.pdf
- Mauro, E., & Sheldon, M. (2024, November 18). She was careful online, but this Toronto teen was still targeted with deepfake porn. CBC News. https://www.cbc.ca/news/canada/deepfake-minors-porn-explicit-images-1.7385099
- Rosen, E. (2025, June 15). Navigating the TAKE IT DOWN Act in litigation. Dynamis LLP. https://www.dynamisllp.com/knowledge/navigating-take-it-down-act-in-litigation