Montreal AI Ethics Institute


AI Policy Corner: Restriction vs. Regulation: Comparing State Approaches to AI Mental Health Legislation

September 15, 2025

✍️ By Ruth Sugiarto.

Ruth is an undergraduate student in computer engineering and a research assistant at the Governance and Responsible AI Lab (GRAIL), Purdue University.


📌 Editor’s Note: This article is part of our AI Policy Corner series, a collaboration between the Montreal AI Ethics Institute (MAIEI) and the Governance and Responsible AI Lab (GRAIL) at Purdue University. The series provides concise insights into critical AI policy developments from the local to international levels, helping our readers stay informed about the evolving landscape of AI governance. This piece spotlights how recent AI-related teen suicides are catalyzing a new wave of state legislation, with Illinois and New York pioneering contrasting frameworks that may shape national approaches to AI mental health governance.


Restriction vs. Regulation: Comparing State Approaches to AI Mental Health Legislation

Source: Manatt Health: Health AI Policy Tracker (July 2025)

This past August, the parents of California teen Adam Raine sued OpenAI, claiming ChatGPT failed to prevent his suicide despite warning signs in his chat logs. Public discussion of unregulated AI intensified in light of similar cases, such as that of Florida teen Sewell Setzer, who also took his own life after forming a relationship with an AI chatbot.

State legislatures are responding with frameworks governing AI's role in mental health. California, for example, passed legislation that enforces safeguards and protects users' right to sue developers.

This article compares two other recent bills: Illinois HB 1806 and New York SB 3008.

Comparison of the Frameworks

Illinois HB 1806, the Wellness and Oversight for Psychological Resources Act, protects clients by requiring that licensed professionals approve AI-made decisions. Its intent is to limit AI's role as a mental health provider by emphasizing proper licensing and strictly circumscribing what AI may do in mental healthcare.

  1. Without review or approval from a licensed professional, AI cannot make independent decisions or directly provide therapy to clients.
  2. AI cannot be used to detect emotions or mental states. 

This bill doesn't address the measures AI must take in critical situations because it aims to block parasocial relationships in professional contexts entirely. That is also the framework's central weakness: it says nothing about AI use outside of professional therapy, which arguably affects far more individuals.

New York Senate Bill SB 3008 (2025) takes a different approach, addressing general AI use by forgoing strict limits. Instead, it enforces transparency and safety measures, specifying how AI must respond in critical situations.

  1. The “AI Companion,” defined as “a system using artificial intelligence, generative artificial intelligence, and/or emotional recognition algorithms designed to simulate a sustained human or human-like relationship with a user,” must clearly state at the beginning of an interaction, and once every three hours, that it isn’t human.
  2. Developers must ensure the AI refers users to crisis services, such as the 988 hotline, if users express self-harm ideation.

By defining “AI companions” broadly, New York's legislation combats parasocial relationships regardless of whether the AI acts as a professional therapist or a casual chatbot character. Additionally, developers must demonstrate how they implement measures to refer users to professional help.

The lack of language regarding AI in clinical settings is, however, a weakness in New York's bill. These settings must be treated sensitively: improper regulation of client confidentiality, consent, and the involvement of licensed professionals could lead to serious harm. Still, New York's legislation succeeds in adapting to rising AI use; it is unrealistic to assume that barring AI will protect mental health in every setting.

New York's legislation approaches the problem from the other direction, requiring frequent, explicit reminders that the AI isn't human and mandating safeguards that refer users to professional services, both of which can help prevent parasocial relationships. Understanding the key differences between these frameworks can inform better future legislation on AI and mental health.

Further Reading

  1. Exploring the Dangers of AI in Mental Health Care (June 2025)
  2. Your AI therapist might be illegal soon. Here’s why (Aug 2025)
  3. Artificial intelligence in mental health care (Mar 2025)

Photo credit: Photo by Rogelio Gonzalez on Unsplash


© 2025 Montreal AI Ethics Institute. This work is licensed under a Creative Commons Attribution 4.0 International License.