Montreal AI Ethics Institute

Democratizing AI ethics literacy


Can Chatbots Replace Human Mental Health Support?

August 18, 2025

✍️ Op-Ed by Sabia Irfan.

Sabia is an undergraduate computer science student at McGill University and is interested in AI, biotechnology, and journalism.


With more than 987 million users worldwide, chatbots are dominating the technology landscape. From social media messaging apps like Facebook Messenger to OpenAI’s ChatGPT, these virtual assistants have become a daily part of our lives and are increasingly used for mental health support. Growing dependence on chatbots and their human-like interactions is leading the public to use them as a substitute for social interaction and psychological comfort.

In fact, a 2025 survey found that nearly half of Americans reported using large language models for psychological support in the past year, with 75% seeking help for anxiety and nearly 60% for depression. This marks a dramatic shift in how people engage with emotional and psychological care. By comparison, a 2021 survey conducted by Woebot found that only 22% of adults had used a mental health chatbot, and 47% said they would be interested in using one if needed.

Utilizing generative AI and natural language processing, modern multimodal chatbots can process text and voice inputs and generate responses in kind. Mental health chatbots are designed for therapeutic purposes, with some presented as interactive companions.

The Emotional and Ethical Risks of AI Therapy

There is a global shortage of mental health professionals, leaving millions without adequate care. Accessible and affordable AI chatbots are emerging to bridge this gap: in 2021, there were approximately 20,000 AI-based mental health apps. Given the sensitive and emotional context in which these conversations take place, users run the risk of forming emotional bonds with the AI models behind them.

As a result, the typical errors made by chatbots (particularly those built on the transformer architecture underlying most large language models (LLMs)) carry potentially grave consequences, raising urgent questions about the safety and ethics of AI-powered mental health support.

According to the Centre for Addiction and Mental Health (CAMH), in any given year, 1 in 5 Canadians experiences a mental illness. The non-judgmental responses of chatbots make users feel valued and more comfortable sharing their lives with AI than with humans. Consequently, individuals turn to mental health chatbots and confide personal details of their struggles, relying on their output as a form of therapy. 

Top-tier mental health chatbots like Woebot and Wysa aim to provide constant emotional support as personalized wellbeing coaches. Their anonymous environments, coupled with features like mood tracking and guided meditations, encourage open expression. Some chatbots, like Replika, are customizable with personas and backstories, blurring the line between tool and companion.

Therapists are trained to incorporate perspectives and gently challenge harmful thoughts. Currently, no AI chatbot has been approved by the U.S. Food and Drug Administration to treat mental health disorders. Bots that imitate therapists lack the professional training and ethical oversight necessary to provide genuine therapeutic care.

When Chatbots Cause Harm

Chatbot technology is far from fully accurate and often produces nonsensical advice that users may nonetheless trust as reliable. A 2025 study conducted by Stanford University revealed that AI therapy chatbots are prone to expressing stigma and giving dangerous responses. The study reported that the chatbots enabled harmful behaviour, including encouraging suicidal intent and alcohol dependence.

A recent example involves Character.ai, a platform whose chatbot told a 17-year-old that murdering his parents was a “reasonable response” to their limiting his screen time. This incident highlights how emotionally charged interactions with inadequately regulated AI chatbots can provoke dangerously inappropriate outputs and real-world harm. It helps explain why the State of Illinois recently passed the Wellness and Oversight for Psychological Resources Act into law on August 4th, restricting the use of AI in mental health care.

Unleashing unpredictable technology on an emotionally vulnerable population is irresponsible, especially when the stakes of fragile mental health are life and death. Furthermore, a past study found that 76% of participants lacked understanding of the basic privacy risks associated with chatbot interactions, and 27% did not understand how chatbot providers handle their data. Chatbot providers collect user-input data for model training purposes, and chat histories held by companies such as OpenAI and Google may be reviewed by employees and have, in some instances, been exposed. Self-disclosure in human-chatbot relationships ultimately exposes users to potential data misuse. Clear disclaimers about chatbot limitations are essential to ensure users understand the AI they are engaging with and are kept safe.

To illustrate, many chatbots active in the public health space operate without clinical oversight, relying on pre-programmed directives in emergencies. To ensure safety and effective crisis intervention, licensed clinicians must supervise chatbot development and remain a constant presence in user interactions.
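The pre-programmed directives mentioned above often amount to little more than keyword matching. The hypothetical sketch below (all names and terms are illustrative, not drawn from any real product) shows how brittle such a rule can be: it catches only exact phrases, missing paraphrases and context entirely, which is precisely why clinician oversight matters.

```python
# Hypothetical illustration of a keyword-based crisis check, the kind of
# pre-programmed directive some mental health chatbots rely on in lieu of
# clinical judgment. CRISIS_TERMS and route_message are invented for this sketch.
CRISIS_TERMS = {"suicide", "kill myself", "self-harm", "overdose"}

def route_message(text: str) -> str:
    """Return 'escalate' if the message matches a crisis term, else 'chatbot'."""
    lowered = text.lower()
    if any(term in lowered for term in CRISIS_TERMS):
        return "escalate"  # hand off to a hotline or licensed clinician
    return "chatbot"       # otherwise, the model answers unsupervised
```

Note the failure mode: a message like “I don’t want to be here anymore” contains none of the listed terms and would be routed straight to the model, even though a trained clinician would recognize it as a possible warning sign.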

While AI chatbots are evolving into attractive tools for mental health support, they are not reliable enough to replace human therapy. Chatbots continue to generate destructive advice, violate confidentiality, and lack clinical monitoring. People deserve compassionate, ethical care that is more reliable than the unpredictable nature of LLM-based therapeutic tools.


Photo credit: Blue yellow and red abstract painting on Unsplash



About Us


Founded in 2018, the Montreal AI Ethics Institute (MAIEI) is an international non-profit organization equipping citizens concerned about artificial intelligence and its impact on society to take action.


  • © 2025 MONTREAL AI ETHICS INSTITUTE.
  • This work is licensed under a Creative Commons Attribution 4.0 International License.