Montreal AI Ethics Institute

Democratizing AI ethics literacy

AI Policy Corner: The Colorado State Deepfakes Act

April 14, 2025

✍️ By Ogadinma Enwereazu.

Ogadinma is a PhD Student in Political Science and a Graduate Affiliate at the Governance and Responsible AI Lab (GRAIL), Purdue University.


📌 Editor’s Note: This article is part of our AI Policy Corner series, a collaboration between the Montreal AI Ethics Institute (MAIEI) and the Governance and Responsible AI Lab (GRAIL) at Purdue University. The series provides concise insights into critical AI policy developments from the local to international levels, helping our readers stay informed about the evolving landscape of AI governance.


The Colorado State Deepfakes Act

In 2024, the state of Colorado enacted the Candidate Election Deepfake Disclosures Act, which aims to address growing concerns about AI-generated deepfakes in political campaigns.

What are Deepfakes?

“Deepfakes” refer to synthetic or manipulated media, such as images, videos, or audio, generated by artificial intelligence to falsely depict an individual saying or doing something they never actually said or did.

Key Provisions of the Colorado Election Deepfake Disclosures Act

The Act prohibits the distribution of deepfake media depicting candidates for elected office when the deepfake is undisclosed or insufficiently labelled, particularly when the distributor knows the content is false or acts with reckless disregard for its falsity. The Act distinguishes deepfakes from merely AI-enhanced media by excluding content that has been only minimally edited or adjusted.

To comply with this Act, any such communication must feature a clear and concise disclosure stating:

“This (image/audio/video/multimedia) has been edited and depicts speech or conduct that falsely appears to be authentic or truthful.”

The above disclaimer should also be included in the communication’s metadata and, where feasible, should be difficult for future users to remove.
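To illustrate the dual labeling requirement (a visible disclosure plus a copy in the metadata), the sketch below models a communication record and checks whether it carries the required statement in both places. The function and field names are hypothetical, introduced here for illustration; they are not drawn from the text of the Act.

```python
# Hypothetical compliance check for the Act's dual disclosure requirement.
REQUIRED_DISCLOSURE = (
    "This {medium} has been edited and depicts speech or conduct "
    "that falsely appears to be authentic or truthful."
)

def is_compliant(medium: str, visible_text: str, metadata: dict) -> bool:
    """Return True only if both the visible content and the metadata
    carry the disclosure statement for the given medium type."""
    disclosure = REQUIRED_DISCLOSURE.format(medium=medium)
    return (disclosure in visible_text
            and disclosure in metadata.get("disclosure", ""))
```

A communication passes only when the statement appears verbatim in both channels; a disclosure naming the wrong medium (e.g. "video" for an audio clip) would not satisfy this sketch.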

Liability and Enforcement

For unpaid advertising violations, penalties start at $100 per violation; for paid advertising, the penalty is at least 10% of the amount spent on the communication. Additionally, candidates depicted in undisclosed or improperly disclosed deepfakes can pursue civil action for injunctive relief or damages, including attorney fees and costs.
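The penalty floor described above reduces to simple arithmetic. The function below is an illustrative reading of that rule, not legal advice; the parameter names are mine, and actual penalties could exceed these minimums.

```python
def minimum_penalty(paid: bool, amount_spent: float = 0.0,
                    violations: int = 1) -> float:
    """Illustrative minimum penalty under the Act:
    $100 per violation for unpaid advertising,
    at least 10% of the amount spent for paid advertising."""
    if paid:
        return 0.10 * amount_spent
    return 100.0 * violations
```

For example, three unpaid violations would carry a floor of $300, while a paid communication costing $5,000 would carry a floor of $500.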

Exemptions

The Act exempts certain entities from liability, including interactive computer services, broadcasting stations (radio, television, cable, and satellite), internet websites, regularly published newspapers, and providers of the technology used to create deepfakes, provided they comply with the immunities granted by federal law.

Limitations of the Act

Deepfakes can go viral in minutes, and by the time enforcement kicks in, the reputational damage is often already done. Moreover, because deepfakes are frequently generated outside the country, the legislation is unlikely to have a substantial practical effect on foreign-origin content.

Most deepfake detection technologies are still catching up with the capabilities of generative AI. This raises questions about the effectiveness of enforcement if the content in question is not easily identifiable as fake. Attorney General Phil Weiser acknowledged this in a September 2024 statement, warning that even AI tools built to detect deepfakes often struggle to keep up.  

To date, over 30 U.S. states have introduced or passed deepfake laws, reflecting broad agreement on the serious risks such content poses. The penalties, however, vary in severity. States like New Jersey and Louisiana impose harsher punishments, including multi-year prison sentences and fines reaching $50,000. Others, such as Delaware and California, rely on disclosure requirements or injunctive relief without imposing significant financial or criminal penalties. Colorado’s minimum $100 fine for undisclosed deepfakes is relatively modest by comparison.

As generative AI continues to evolve, states may face increased pressure to update their regulatory frameworks on deepfakes to safeguard electoral integrity and address broader ethical concerns about privacy and consent. 

Further Reading

  1. HB24-1147 Candidate Election Deepfake Disclosure Act 
  2. What are Deepfakes, and how are they created?  
  3. Regulating Election Deepfakes: a comparison of state laws 





© 2025 Montreal AI Ethics Institute. This work is licensed under a Creative Commons Attribution 4.0 International License.