Montreal AI Ethics Institute

Democratizing AI ethics literacy

AI Policy Corner: Texas and New York: Comparing U.S. State-Level AI Laws

July 7, 2025

✍️ By Ogadinma Enwereazu.

Ogadinma is a PhD Student in Political Science and a Graduate Affiliate at the Governance and Responsible AI Lab (GRAIL), Purdue University.


📌 Editor’s Note: This article is part of our AI Policy Corner series, a collaboration between the Montreal AI Ethics Institute (MAIEI) and the Governance and Responsible AI Lab (GRAIL) at Purdue University. The series provides concise insights into critical AI policy developments from the local to international levels, helping our readers stay informed about the evolving landscape of AI governance. This piece spotlights the Texas Responsible Artificial Intelligence Governance Act, focusing on the final updates and comparing it to New York’s AI Act.

The years 2024 and 2025 have seen a significant increase in state-level AI policy. In May, we wrote an article on the Texas Responsible Artificial Intelligence Governance Act (TRAIGA), which at the time had not yet been signed into law. This article compares the final TRAIGA, enacted on June 22, 2025, with the New York Artificial Intelligence Act (S1169A).

What makes the TRAIGA stand out is both its timing and its scope. At a time when the U.S. Congress continues to debate national AI legislation, such as the One Big Beautiful Bill and its efforts to block states from passing their own AI laws, Texas has chosen to act independently.

The TRAIGA establishes ethical and transparent procurement guidelines for agencies purchasing AI systems, bans state agents and government contractors from intentionally using AI systems in a manner that discriminates against individuals based on protected characteristics, and prohibits AI-driven social scoring. The law also creates a regulatory sandbox to encourage AI innovation by allowing companies to test AI tools under reduced oversight. Fines for non-compliance range from $10,000 to $12,000 for curable violations and from $80,000 to $200,000 for uncurable violations.

The final TRAIGA has been criticized for its narrow scope: it does not regulate private-sector AI deployment and offers individual consumers little ability to sue for AI-related harms. This leaves open the possibility that major tech companies and private employers in Texas will continue deploying unregulated AI on civilians. Additionally, proving discrimination requires evidence of intent, making it difficult to hold agencies or vendors accountable for 'unintended' AI biases, especially in high-risk areas such as predictive policing or benefit-eligibility systems.

In contrast, New York’s AI Act (S1169A) takes a consumer protection and civil rights approach. The bill covers both developers and deployers of high-risk AI systems that impact critical decisions, such as hiring, lending, education, housing, and healthcare. New York mandates third-party AI audits for bias and algorithmic discrimination. Deployers must provide prior notice to individuals when AI will be used in decision-making, and individuals have the right to opt out or appeal any AI-generated decision. Importantly, New York’s AI Act offers a private right of action, allowing individuals to sue for damages if they suffer harm due to non-compliant AI systems.

Another key difference lies in each state's approach to AI impact assessment. The TRAIGA lacks a strong audit requirement, focusing instead on incident reporting after harm has occurred. The New York AI Act, by contrast, requires developers and deployers to maintain Risk Management Programs aligned with the NIST AI Risk Management Framework, reflecting a more precautionary philosophy. It also includes whistleblower protections, ensuring that ethical concerns raised internally do not result in retaliation.

In sum, Texas appears to favour innovation and government self-regulation, while New York lays more emphasis on individual remedies and corporate accountability.

Further Reading

  1. The One Big Beautiful Bill Could Ban States from Regulating AI (May 27, 2025)
  2. Texas signs the Responsible AI Governance Act into Law (June 23, 2025)
  3. Trump’s allies wanted to strip states’ powers on AI. It backfired. (July 2, 2025)

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.

© Montreal AI Ethics Institute. This work is licensed under a Creative Commons Attribution 4.0 International License.