AI Policy Corner: Restriction vs. Regulation: Comparing State Approaches to AI Mental Health Legislation

September 15, 2025

✍️ By Ruth Sugiarto.

Ruth is an Undergraduate Student in Computer Engineering and a Research Assistant at the Governance and Responsible AI Lab (GRAIL), Purdue University.


📌 Editor’s Note: This article is part of our AI Policy Corner series, a collaboration between the Montreal AI Ethics Institute (MAIEI) and the Governance and Responsible AI Lab (GRAIL) at Purdue University. The series provides concise insights into critical AI policy developments from the local to international levels, helping our readers stay informed about the evolving landscape of AI governance. This piece spotlights how recent AI-related teen suicides are catalyzing a new wave of state legislation, with Illinois and New York pioneering contrasting frameworks that may shape national approaches to AI mental health governance.


Restriction vs. Regulation: Comparing State Approaches to AI Mental Health Legislation

Source: Manatt Health, Health AI Policy Tracker (July 2025)

This past August, the parents of California teen Adam Raine sued OpenAI, claiming ChatGPT failed to prevent his suicide despite warning signs in his chat logs. The case intensified discussion of unregulated AI, recalling that of Florida teen Sewell Setzer, who also took his own life after forming a relationship with an AI chatbot.

State legislatures are responding with frameworks governing AI's role in mental health. California, for example, passed legislation that mandates safeguards and protects users' right to sue developers.

This article compares two other recent bills: Illinois HB 1806 and New York SB 3008.

Comparison of the Frameworks

Illinois HB 1806, the Wellness and Oversight for Psychological Resources Act, protects clients by requiring licensed professionals to approve AI-made decisions. The intention is to limit AI's role as a mental health provider: the bill emphasizes proper licensing and strictly restricts what AI may do in mental healthcare (a sketch of the resulting approval gate follows the list below).

  1. Without review or approval from a licensed professional, AI cannot make independent decisions or directly provide therapy to clients.
  2. AI cannot be used to detect emotions or mental states. 
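To make the Illinois model concrete, here is a minimal sketch of such a human-in-the-loop approval gate. HB 1806 prescribes behavior, not code, so every name here (Draft, ReviewQueue, deliver) is hypothetical:

```python
# Hypothetical sketch of an HB 1806-style approval gate; the bill itself
# prescribes no API, and all names here are illustrative.
from dataclasses import dataclass

@dataclass
class Draft:
    """An AI-generated suggestion that must not reach the client as-is."""
    client_id: str
    text: str
    approved: bool = False
    reviewer_license: str | None = None

class ReviewQueue:
    """Holds AI output until a licensed professional signs off on it."""
    def __init__(self) -> None:
        self._pending: list[Draft] = []

    def submit(self, draft: Draft) -> None:
        # AI output is queued for review, never sent directly to the client.
        self._pending.append(draft)

    def approve(self, draft: Draft, reviewer_license: str) -> Draft:
        # Only a licensed professional's sign-off releases the draft.
        self._pending.remove(draft)
        draft.approved = True
        draft.reviewer_license = reviewer_license
        return draft

def deliver(draft: Draft) -> str:
    """Refuse to deliver anything a professional has not approved."""
    if not draft.approved:
        raise PermissionError("Unreviewed AI output cannot reach a client.")
    return draft.text
```

The point is structural: the AI can draft, but nothing reaches a client without a licensed professional's sign-off.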

The bill does not specify how AI must respond in critical situations because it aims to block parasocial relationships in professional contexts entirely. That is also the framework's central weakness: it leaves AI use outside professional therapy unaddressed, even though such use arguably reaches far more people.

New York Senate Bill SB 3008 (2025) takes a more flexible approach to general AI use, forgoing strict limits. Instead, it enforces transparency and safety measures, with specific language on how AI must respond in critical situations (both requirements are sketched after the list below):

  1. The “AI companion,” defined as “a system using artificial intelligence, generative artificial intelligence, and/or emotional recognition algorithms designed to simulate a sustained human or human-like relationship with a user,” must clearly state at the beginning of an interaction, and at least once every three hours, that it is not human.
  2. Developers must ensure the AI refers users to crisis services, such as the 988 hotline, if users show self-harm ideation.
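As a rough illustration of how a developer might satisfy both requirements, the sketch below tracks a disclosure timestamp and scans user messages for self-harm cues. SB 3008 mandates outcomes, not implementations; the keyword list is a crude placeholder for whatever detection method a developer actually deploys, and all names are hypothetical:

```python
# Hypothetical sketch of SB 3008-style safeguards; the bill mandates
# behavior, not code. The keyword check below is a crude placeholder
# for a real self-harm detection method.
import time

DISCLOSURE = "Reminder: I am an AI companion, not a human."
CRISIS_REFERRAL = ("If you are thinking about harming yourself, please call "
                   "or text 988, the Suicide & Crisis Lifeline.")
THREE_HOURS = 3 * 60 * 60  # the disclosure cadence named in the bill
SELF_HARM_CUES = ("hurt myself", "end my life", "suicide")  # placeholder only

class CompanionSession:
    def __init__(self) -> None:
        self._last_disclosure: float | None = None  # None forces first disclosure

    def respond(self, user_message: str, model_reply: str) -> list[str]:
        parts: list[str] = []
        now = time.monotonic()
        # Disclose non-human status at the start and every three hours after.
        if self._last_disclosure is None or now - self._last_disclosure >= THREE_HOURS:
            parts.append(DISCLOSURE)
            self._last_disclosure = now
        # Refer users showing self-harm ideation to the 988 hotline.
        if any(cue in user_message.lower() for cue in SELF_HARM_CUES):
            parts.append(CRISIS_REFERRAL)
        parts.append(model_reply)
        return parts
```

In practice a developer would swap the keyword tuple for a classifier and persist the disclosure timestamp across sessions, but the compliance logic stays the same: a timed disclosure check and a referral check on every turn.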

By defining “AI companions” broadly, the New York legislation combats parasocial relationships regardless of whether the AI acts as a professional therapist or a casual chatbot character. Developers must also demonstrate how they implement measures to refer users to professional help.

The lack of language on AI in clinical settings is a weakness of New York's bill, however. Those settings demand sensitive treatment: inadequate rules on client confidentiality, consent, and the involvement of licensed professionals could lead to disastrous outcomes. Still, New York's legislation succeeds in adapting to rising AI use; it is unrealistic to assume that simply barring AI will protect mental health in any setting.

Where Illinois restricts, New York regulates: it requires frequent, explicit reminders that the AI is not human and safeguards that refer users to professional services, both of which can help prevent parasocial relationships. Understanding the key differences between these frameworks can inform better future legislation on AI and mental health.

Further Reading

  1. Exploring the Dangers of AI in Mental Health Care (June 2025)
  2. Your AI therapist might be illegal soon. Here’s why (Aug 2025)
  3. Artificial intelligence in mental health care (Mar 2025)

Photo credit: Rogelio Gonzalez on Unsplash

