

✍️ By Ogadinma Enwereazu.
Ogadinma is a PhD Student in Political Science and a Graduate Affiliate at the Governance and Responsible AI Lab (GRAIL), Purdue University.
📌 Editor’s Note: This article is part of our AI Policy Corner series, a collaboration between the Montreal AI Ethics Institute (MAIEI) and the Governance and Responsible AI Lab (GRAIL) at Purdue University. The series provides concise insights into critical AI policy developments from the local to international levels, helping our readers stay informed about the evolving landscape of AI governance. This piece spotlights the Texas Responsible Artificial Intelligence Governance Act, focusing on the final updates and comparing it to New York’s AI Act.
Texas and New York: Comparing U.S. State-Level AI Laws
The years 2024 and 2025 have seen a significant increase in state-level AI policymaking. In May, we wrote an article on the Texas Responsible Artificial Intelligence Governance Act (TRAIGA), which at the time had not yet been signed into law. This article compares the enacted TRAIGA, signed into law on June 22, 2025, with the New York Artificial Intelligence Act (S1169A).
What makes the TRAIGA stand out is both its timing and its scope. At a time when the U.S. Congress continues to debate national AI legislation, including a provision in the One Big Beautiful Bill that would block states from passing their own AI laws, Texas has chosen to act independently.
The TRAIGA establishes ethical and transparent procurement guidelines for agencies purchasing AI systems, bans state agencies and government contractors from intentionally using AI systems to discriminate against individuals based on protected characteristics, and prohibits AI-driven social scoring. The law also creates a regulatory sandbox that encourages AI innovation by allowing companies to test AI tools under reduced oversight. Fines for non-compliance range from $10,000 to $12,000 for curable violations and from $80,000 to $200,000 for incurable violations.
The final TRAIGA has been criticized for its narrow scope: it does not regulate private-sector AI deployment and gives individual consumers little ability to sue for AI-related harms. As a result, major tech companies and private employers in Texas may continue deploying largely unregulated AI systems that affect the public. Additionally, proving discrimination requires evidence of intent, making it difficult to hold agencies or vendors accountable for ‘unintended’ AI biases, especially in high-risk areas such as predictive policing or benefit-eligibility systems.
In contrast, New York’s AI Act (S1169A) takes a consumer-protection and civil-rights approach. The bill covers both developers and deployers of high-risk AI systems that affect consequential decisions, such as those in hiring, lending, education, housing, and healthcare. It mandates third-party audits for bias and algorithmic discrimination. Deployers must give individuals prior notice when AI will be used in decision-making, and individuals have the right to opt out of or appeal any AI-generated decision. Importantly, New York’s AI Act provides a private right of action, allowing individuals to sue for damages if they are harmed by non-compliant AI systems.
Another key difference lies in each state’s approach to AI impact assessment. The TRAIGA lacks a strong audit requirement, focusing instead on incident reporting after harm has occurred. The New York AI Act, by contrast, requires developers and deployers to maintain risk management programs aligned with the NIST AI Risk Management Framework, reflecting a more precautionary philosophy. It also includes whistleblower protections, ensuring that employees who raise ethical concerns internally do not face retaliation.
In sum, Texas appears to favour innovation and government self-regulation, while New York places greater emphasis on individual remedies and corporate accountability.