
AI Policy Corner: The Colorado State Deepfakes Act

April 14, 2025

✍️ By Ogadinma Enwereazu.

Ogadinma is a PhD Student in Political Science and a Graduate Affiliate at the Governance and Responsible AI Lab (GRAIL), Purdue University.


📌 Editor’s Note: This article is part of our AI Policy Corner series, a collaboration between the Montreal AI Ethics Institute (MAIEI) and the Governance and Responsible AI Lab (GRAIL) at Purdue University. The series provides concise insights into critical AI policy developments from the local to international levels, helping our readers stay informed about the evolving landscape of AI governance.


The Colorado State Deepfakes Act

In 2024, the state of Colorado enacted the Candidate Election Deepfake Disclosures Act, which aims to address growing concern about AI-generated deepfakes in political campaigns.

What are Deepfakes?

“Deepfakes” are synthetic or manipulated media, such as images, videos, or audio, generated by artificial intelligence to falsely depict an individual saying or doing something they never actually said or did.

Key Provisions of the Colorado Election Deepfake Disclosures Act

The Act prohibits distributing deepfakes of candidates for elected office that are undisclosed or insufficiently labelled, particularly when the distributor knows the content is false or acts with reckless disregard for its falsity. The Act distinguishes deepfakes from merely AI-enhanced media by excluding content that has been only minimally edited or adjusted.

To comply with this Act, any such communication must feature a clear and concise disclosure stating:

“This (image/audio/video/multimedia) has been edited and depicts speech or conduct that falsely appears to be authentic or truthful.”

The above disclaimer should also be included in the communication’s metadata and, where feasible, should be difficult for future users to remove.
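To make the metadata requirement concrete, here is a minimal Python sketch, assuming a PNG image and the Pillow imaging library; the Act does not prescribe any particular metadata format, and the file names and metadata key below are hypothetical.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Statutory disclosure text from the Act, phrased for an image.
DISCLOSURE = (
    "This image has been edited and depicts speech or conduct that "
    "falsely appears to be authentic or truthful."
)

img = Image.open("campaign_ad.png")       # hypothetical input file

meta = PngInfo()
meta.add_text("Disclosure", DISCLOSURE)   # stored as a PNG tEXt chunk

img.save("campaign_ad_disclosed.png", pnginfo=meta)
```

Plain text chunks like this are stripped by a simple re-encode, which is precisely why the Act’s “difficult to remove” language points toward more durable provenance approaches, such as cryptographically signed content credentials.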

Liability and Enforcement

For unpaid advertising violations, penalties start at $100 per violation; for paid advertising, the penalty is at least 10% of the amount spent on the communication. Additionally, candidates depicted in undisclosed or improperly disclosed deepfakes can pursue civil action for injunctive relief or damages, including attorney fees and costs.
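As a back-of-the-envelope illustration of those floors, here is a short Python sketch; the function name and interface are ours, and it ignores per-violation aggregation and any court-awarded damages or fees.

```python
from typing import Optional

def minimum_penalty(paid_spend_usd: Optional[float]) -> float:
    """Minimum penalty for a single violation under the Act (sketch).

    paid_spend_usd: amount spent on a paid communication,
    or None for an unpaid communication.
    """
    if paid_spend_usd is None:
        return 100.0                   # unpaid: $100 per violation
    return 0.10 * paid_spend_usd       # paid: at least 10% of the spend

print(minimum_penalty(None))      # 100.0
print(minimum_penalty(20_000))    # 2000.0 for a $20,000 ad spend
```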

Exemptions

The Act exempts certain entities from liability, including interactive computer services; broadcasting stations (radio, television, cable, and satellite); internet websites; regularly published newspapers; and providers of the technology used to create deepfakes, provided they comply with the immunities granted by federal law.

Limitations of the Act

Deepfakes can go viral within minutes, and by the time enforcement kicks in, the reputational damage is often already done. Moreover, because deepfakes are frequently generated outside the country, the legislation’s practical reach may be limited.

Most deepfake detection technologies are still catching up with the capabilities of generative AI, which raises questions about the effectiveness of enforcement when the content in question is not easily identifiable as fake. Colorado Attorney General Phil Weiser acknowledged this in a September 2024 statement, warning that even AI tools built to detect deepfakes often struggle to keep up.

As of this writing, more than 30 U.S. states have introduced or passed deepfake laws, reflecting broad agreement on the serious risks such content poses. The penalties, however, vary in severity. States like New Jersey and Louisiana impose harsher sanctions, including multi-year prison sentences and fines reaching $50,000, while others, like Delaware and California, rely on disclosure requirements or injunctive relief without significant financial or criminal penalties. Against that backdrop, Colorado’s minimum $100 fine for undisclosed deepfakes is relatively modest.

As generative AI continues to evolve, states may face increased pressure to update their regulatory frameworks on deepfakes to safeguard electoral integrity and address broader ethical concerns about privacy and consent. 

Further Reading

  1. HB24-1147: Candidate Election Deepfake Disclosures Act
  2. What Are Deepfakes, and How Are They Created?
  3. Regulating Election Deepfakes: A Comparison of State Laws

