Montreal AI Ethics Institute


Democratizing AI ethics literacy


The Canada Protocol: AI checklist for Mental Health & Suicide Prevention

May 1, 2019



The Canada Protocol is an open-access project for Artificial Intelligence and Big Data developers, decision-makers, professionals, researchers, and anyone considering using AI. It was created by Carl Mörch and Abhishek Gupta in Montreal, Canada.

It began as part of Carl Mörch’s Ph.D. research. The Mental Health version was scientifically supervised by his Ph.D. supervisor, Brian L. Mishara.

AI is a source of immense hope and of valid concerns, and it is challenging to know how to remain ethical when using it. That is why we started the Canada Protocol. We have synthesized and analyzed over 40 reports, professional guidelines, and key studies on AI & Ethics. Our intention is to gather the existing scientific, validated recommendations on how to address AI’s ethical risks and challenges. We hope this project can help you!

The Mental Health and Suicide Prevention checklist focuses on the very specific challenges of using AI in the context of Mental Health and Suicide Prevention.

We created a checklist to review the key potential ethical questions. It was validated in 2018 by 16 experts and professionals through a two-round Delphi consultation.

Why Mental Health & AI?

Mental Health has been transformed by the rise of AI and Big Data (Luxton, 2014). Professionals, researchers, and companies increasingly use AI to detect at-risk individuals and depressed users, study emotions, increase motivation, improve public health strategies, and more. But as promising as it may be, AI raises many complex ethical challenges, such as the difficulty of obtaining consent or the risk of divulging private information.

The Development

We synthesized and analyzed over 40 reports, professional guidelines, and key studies on AI & Ethics, collecting more than 300 mentions of ethical challenges. We deduplicated the items and selected those most relevant to Mental Health & Suicide Prevention. We then invited international experts (AI developers and researchers specialized in ethics, ICT, and health) to provide feedback using the Delphi Method, an approach commonly used in healthcare to gather expert opinions and reach consensus on a specific topic.

Why a checklist?

Checklists are frequently used in health care, for a wide range of purposes: to support clinical diagnosis, to verify that a research methodology has been properly implemented, or to improve public health strategies. They are useful because they summarize key recommendations and best practices.

How does it work?

This version of the Canada Protocol is a checklist. It invites you to review 38 key ethical questions that arise when AI is used in the context of Mental Health Care or Suicide Prevention. You are asked to read each item and, in doing so, review your practices and how your Autonomous Intelligent System (IEEE, 2016) works.
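For teams who want to fold this review into their development workflow, the item-by-item process can be sketched programmatically. This is a hypothetical illustration only: the question texts below are paraphrased examples, not actual items from the Protocol, and the `ProtocolReview` class is an assumed structure, not part of the project.

```python
from dataclasses import dataclass

@dataclass
class ChecklistItem:
    """One ethical question to be reviewed and annotated."""
    question: str
    reviewed: bool = False
    notes: str = ""

class ProtocolReview:
    """Tracks progress through a checklist review session."""
    def __init__(self, questions):
        self.items = [ChecklistItem(q) for q in questions]

    def review(self, index, notes=""):
        # Mark an item as reviewed and record how it was addressed.
        self.items[index].reviewed = True
        self.items[index].notes = notes

    def remaining(self):
        # Questions still awaiting review.
        return [i.question for i in self.items if not i.reviewed]

# Hypothetical example items, paraphrased for illustration only.
questions = [
    "Have users given informed consent for data collection?",
    "Is there a procedure for responding to detected at-risk individuals?",
    "Are private mental-health data protected in storage and in transit?",
]

review = ProtocolReview(questions)
review.review(0, "Consent flow documented during onboarding.")
print(len(review.remaining()))  # 2 items still to review
```

The point of such a structure is the same as the paper checklist: every item must be explicitly considered and annotated before the review is complete.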

Who are the people that developed it?

Abhishek Gupta is the founder of the Montreal AI Ethics Institute and an AI Ethics researcher working on creating ethical, safe, and inclusive AI systems. He also works as a machine learning Software Engineer at Microsoft in Montreal, where he sits on the AI Ethics Review Board for Commercial Software Engineering.

Carl Mörch is a French psychologist and lecturer at Université du Québec à Montréal (UQAM), Canada. He holds an M.Psy. from the École de Psychologues Praticiens and specializes in the use of Information and Communication Technologies in Psychology. He is currently completing a Ph.D. at UQAM on the use of Artificial Intelligence and Big Data in Suicide Prevention. He also works with the Epione Lab at UQAM on the use of text messaging to improve universal prevention strategies.

Camille Vézy is a Ph.D. student in communication studies at the University of Montreal. She was involved in the co-construction process of the Montreal Declaration for Responsible AI, where she facilitated focus groups on the ethical impacts of AI in education, analyzed the collected data for the final report, and conducted research on digital literacy. She also chairs the board of directors of TechnoCultureClub, a non-profit organization that fosters active community participation in culture by supporting the development of new practices and uses of technology.

Brian Mishara is the founder of the Centre for Research and Intervention on Suicide, Ethical Issues and End-of-Life Practices (CRISE) and has been its director since 1996. He has been a Professor in the Psychology Department at the University of Quebec since 1979. An internationally renowned researcher in suicidology, he co-founded the Quebec Association for Suicide Prevention, served as President of the Canadian Association for Suicide Prevention (CASP), and was President of the International Association for Suicide Prevention (IASP) from 2005 to 2009. In 2013, he authored a book on new technologies in suicide prevention.


Read through The Canada Protocol here.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.


  • © 2025 MONTREAL AI ETHICS INSTITUTE.
  • This work is licensed under a Creative Commons Attribution 4.0 International License.