
AI Policy Corner: The Texas Responsible AI Governance Act

May 26, 2025

✍️ By Tasneem Ahmed.

Tasneem is an undergraduate student in political science and a research assistant at the Governance and Responsible AI Lab (GRAIL), Purdue University.


📌 Editor’s Note: This article is part of our AI Policy Corner series, a collaboration between the Montreal AI Ethics Institute (MAIEI) and the Governance and Responsible AI Lab (GRAIL) at Purdue University. The series provides concise insights into critical AI policy developments from the local to international levels, helping our readers stay informed about the evolving landscape of AI governance.

This piece spotlights the 2024 Texas Responsible AI Governance Act (TRAIGA), focusing on Texas’s comprehensive AI bills and the changes made to the act’s ethical and governance strategies over the past year.


The Texas Responsible AI Governance Act

After Colorado passed the Colorado AI Act, multiple other states, including California, New Mexico, and Texas, followed suit and created their own comprehensive AI safety bills. 

In December 2024, Texas Representative Giovanni Capriglione introduced the Texas Responsible AI Governance Act (TRAIGA). The act focuses on mitigating the effects of high-risk AI systems through a regulatory framework and the development of new programs.

Risk factors, harms, governance strategies, and incentives for compliance

Risk factors and harms:

  • TRAIGA draws on multiple sections of the National Institute of Standards and Technology’s Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile to identify and address the risks and harms of high-risk AI systems. The act focuses mainly on the risk of algorithmic discrimination, requiring deployers to take steps to eliminate the possibility of discrimination. It also addresses other AI risks, such as transparency, reliability, security, and privacy, highlighting the potential for violations of civil or human rights.

Governance strategies:

  • TRAIGA requires deployers to disclose information about their systems to consumers, to perform impact assessments on those systems, and to disclose the assessment results. The act also creates a new government institution, the Texas Artificial Intelligence Council, which will enforce all sections of the act and support deployers as well as new programs meant to improve AI systems within Texas. These programs include a sandbox program designed to spur innovation in the public and private sectors and a workforce grant program to teach workers how to use AI systems.

Incentives for compliance:

  • The act notes that the attorney general will penalize violators with fines for failing to disclose algorithmic discrimination incidents, to disclose specific information about a system to consumers, or to conduct impact assessments of their AI systems.

Changes after the new administration

On March 14, 2025, a new version, TRAIGA 2.0, was filed to reflect changes under the new administration and to produce a more balanced bill with a better chance of passage. While the original bill resembled the comprehensive AI bills seen in other states, the shorter TRAIGA 2.0 departs from other comprehensive bills in several notable ways.

  • The original bill focused on the impacts of high-risk AI systems; the new version, however, does not include any risk-tiering of AI systems based on their impact.
  • While the original legislation applied to everyone in the private and public sectors, TRAIGA 2.0 mainly focuses on AI usage by the government.
  • Unlike the original bill, which focused on preventing harms from AI systems regardless of intent, most consumer-protection sections of TRAIGA 2.0 apply only if the AI system was intentionally designed to harm or manipulate consumers, meaning the deployer must have created the system with the purpose of causing harm.
  • Multiple sections of the original bill ensured that consumers received specific information about the AI system, including its purpose, the deployers’ contact information, and consumers’ rights. TRAIGA 2.0, by contrast, does not require disclosure of information to consumers, although it allows consumers to appeal decisions made by AI systems.
  • Most of the strategies from the original bill intended to mitigate the effects of high-risk AI systems, such as impact assessments and risk management policies, are no longer required for non-governmental deployers of AI systems.

Further Reading

  1. A Deep Dive into Colorado’s Artificial Intelligence Act
  2. Congress Should Preempt Onslaught of State AI Laws
  3. With AI on the rise, Texas House passes bill requiring more transparency in political ads

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.

