

✍️ By Ogadinma Enwereazu.
Ogadinma is a PhD Student in Political Science and a Graduate Affiliate at the Governance and Responsible AI Lab (GRAIL), Purdue University.
📌 Editor’s Note: This article is part of our AI Policy Corner series, a collaboration between the Montreal AI Ethics Institute (MAIEI) and the Governance and Responsible AI Lab (GRAIL) at Purdue University. The series provides concise insights into critical AI policy developments from the local to international levels, helping our readers stay informed about the evolving landscape of AI governance.
In 2024, Colorado enacted the Candidate Election Deepfake Disclosures Act, which addresses growing concern about AI-generated deepfakes in political campaigns.
What are Deepfakes?
“Deepfakes” are synthetic or manipulated media, such as images, videos, or audio, generated by artificial intelligence to falsely depict an individual saying or doing something they never actually said or did.
Key Provisions of the Candidate Election Deepfake Disclosures Act
The Act prohibits distributing communications that contain undisclosed or insufficiently labelled deepfakes of candidates for elected office, particularly when the distributor knows of, or recklessly disregards, the content’s falsity. The Act distinguishes deepfakes from merely AI-enhanced media by excluding content that has only been minimally edited or adjusted.
To comply with this Act, any such communication must feature a clear and concise disclosure stating:
“This (image/audio/video/multimedia) has been edited and depicts speech or conduct that falsely appears to be authentic or truthful.”
The above disclaimer should also be included in the communication’s metadata and, where feasible, should be difficult for downstream users to remove.
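The Act does not prescribe a technical mechanism for embedding the disclosure in metadata, but the idea is straightforward. Below is a minimal, purely illustrative Python sketch using the Pillow imaging library; the file names and the “Disclosure” metadata key are assumptions for demonstration, not statutory requirements.

```python
# Illustrative only: the Act does not mandate any particular tool or format.
# Assumes Pillow is installed (pip install Pillow); file names are hypothetical.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

DISCLOSURE = (
    "This image has been edited and depicts speech or conduct that "
    "falsely appears to be authentic or truthful."
)

# Embed the disclosure as a text chunk in the PNG's metadata.
img = Image.open("campaign_ad.png")
meta = PngInfo()
meta.add_text("Disclosure", DISCLOSURE)
img.save("campaign_ad_disclosed.png", pnginfo=meta)

# Read it back to confirm the chunk survived the round trip.
print(Image.open("campaign_ad_disclosed.png").text["Disclosure"])
```

Plain metadata like this is trivially stripped by re-encoding or screenshotting, which is why the Act’s “difficult to remove” language points toward more tamper-resistant provenance approaches, such as cryptographically signed C2PA Content Credentials, rather than simple text chunks.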
Liability and Enforcement
Penalties for violations involving unpaid advertising start at $100 per violation; for paid advertising, the penalty is at least 10% of the amount spent on the communication. Additionally, candidates depicted in undisclosed or improperly disclosed deepfakes can pursue civil action for injunctive relief or damages, including attorney fees and costs.
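As a rough illustration of how these penalty floors work, here is a short, hypothetical helper function; it mirrors the minimums described above and is not statutory text (actual penalties are set through enforcement and may be higher).

```python
def minimum_penalty(paid: bool, amount_spent: float = 0.0, violations: int = 1) -> float:
    """Lower bound on penalties as described above (illustrative only).

    Unpaid advertising: $100 per violation.
    Paid advertising: at least 10% of the amount spent on the communication.
    """
    if paid:
        return 0.10 * amount_spent
    return 100.0 * violations

# Example: a $5,000 paid ad carries a penalty floor of $500.
print(minimum_penalty(paid=True, amount_spent=5_000))  # 500.0
print(minimum_penalty(paid=False, violations=3))       # 300.0
```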
Exemptions
The Act exempts certain entities from liability, including interactive computer services; broadcasting stations (radio, television, cable, and satellite); internet websites; regularly published newspapers; and providers of the technology used to create deepfakes, to the extent consistent with immunities granted under federal law.
Limitations of the Act
Deepfakes can go viral in minutes, and by the time enforcement kicks in, the reputational damage is often already done. Moreover, because deepfakes are frequently generated outside the United States, the Act’s reach over foreign actors is limited, which curtails its practical effect.
Most deepfake detection technologies are still catching up with the capabilities of generative AI, which raises questions about the effectiveness of enforcement when the content in question is not easily identifiable as fake. Colorado Attorney General Phil Weiser acknowledged this in a September 2024 statement, warning that even AI tools built to detect deepfakes often struggle to keep up.
At the time of writing, more than 30 U.S. states have introduced or passed deepfake laws, reflecting broad agreement on the serious risks such content poses. The penalties, however, vary widely in severity. States such as New Jersey and Louisiana impose harsher penalties, including multi-year prison sentences and fines reaching $50,000, while others, such as Delaware and California, rely on disclosure requirements or injunctive relief without imposing significant financial or criminal penalties. Colorado’s minimum $100 fine for an undisclosed deepfake is relatively modest by comparison.
As generative AI continues to evolve, states may face increased pressure to update their regulatory frameworks on deepfakes to safeguard electoral integrity and address broader ethical concerns about privacy and consent.