Montreal AI Ethics Institute

Democratizing AI ethics literacy


The Canada Protocol: AI checklist for Mental Health & Suicide Prevention

May 1, 2019

Read the Canada Protocol: AI checklist for Mental Health & Suicide Prevention. Download


The Canada Protocol is an open-access project for Artificial Intelligence and Big Data developers, decision-makers, professionals, researchers, and anyone considering using AI. It was created by Carl Mörch and Abhishek Gupta in Montreal, Canada.

It started as part of Carl Mörch's Ph.D. research. The Mental Health version was scientifically supervised by his Ph.D. supervisor, Brian L. Mishara.

AI is a source of immense hope and of valid concerns, and knowing how to use it ethically is challenging. That's why we started the Canada Protocol. We have synthesized and analyzed over 40 reports, professional guidelines, and key studies on AI and ethics, with the aim of gathering all of the existing scientific, validated recommendations on how to address AI's ethical risks and challenges. We hope this project can help you!

The Mental Health and Suicide Prevention checklist focuses on the very specific challenges of using AI in the context of Mental Health and Suicide Prevention.

We created a checklist to review the key potential ethical questions. It was validated in 2018 by 16 experts and professionals through a two-round Delphi consultation.

Why Mental Health & AI?

Mental Health has been transformed by the rise of AI and Big Data (Luxton, 2014). Professionals, researchers, and companies increasingly use AI to detect at-risk individuals, identify depressed users, study emotions, increase motivation, and improve public health strategies, among other applications. But as promising as it may be, AI raises many complex ethical challenges, such as the difficulty of obtaining consent or the risk of divulging private information.

The Development

We synthesized and analyzed over 40 reports, professional guidelines, and key studies on AI & Ethics, collecting over 300 mentions of ethical challenges. We deduplicated these items and selected those most relevant to Mental Health & Suicide Prevention. We then invited international experts (AI developers and researchers specializing in ethics, ICT, and health) to provide feedback using the Delphi Method, an approach commonly used in healthcare to gather expert opinions and reach consensus on a specific topic.
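The filtering step above can be sketched in code. This is a minimal illustration of a two-round Delphi-style filter, not the Protocol's actual procedure: the sample items, expert votes, and the 75% agreement threshold are all hypothetical placeholders.

```python
def delphi_round(candidate_items, ratings, threshold=0.75):
    """Keep items that at least `threshold` of experts rate as relevant.

    `ratings` maps each item to a list of True/False expert votes.
    The threshold is an illustrative assumption, not the Protocol's.
    """
    kept = []
    for item in candidate_items:
        votes = ratings[item]
        agreement = sum(votes) / len(votes)  # share of experts agreeing
        if agreement >= threshold:
            kept.append(item)
    return kept

# Hypothetical round-1 votes from 16 experts (invented examples):
ratings_r1 = {
    "Obtain informed consent": [True] * 15 + [False],
    "Disclose data sources": [True] * 10 + [False] * 6,
}
round1 = delphi_round(list(ratings_r1), ratings_r1)
# Items surviving round 1 would then be re-rated in a second round,
# mirroring the two-round consultation described above.
```

In a real Delphi process, experts also see the group's aggregate ratings between rounds and may revise their judgments, which is what drives convergence toward consensus.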

Why a checklist?

Checklists are frequently used in health care. They serve a wide range of purposes: helping clinicians diagnose, verifying that a research methodology has been properly implemented, or improving public health strategies. They can be very useful for summarizing key recommendations and best practices.

How does it work?

This version of the Canada Protocol is a checklist. It invites you to review 38 key ethical questions that arise when AI is used in Mental Health Care or Suicide Prevention. You are asked to read each item and, in doing so, review your practices and how your Autonomous Intelligent System (IEEE, 2016) works.
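The review process above can be sketched as a simple pass over the checklist. The two sample questions below are invented placeholders, not items from the actual 38-question checklist.

```python
# Hypothetical checklist items (not taken from the Canada Protocol):
checklist = [
    "Have users consented to the collection of their mental-health data?",
    "Can the system's decisions be explained to clinicians?",
]

def review(items, answers):
    """Pair each question with a yes/no answer and return the open items,
    i.e. the questions the team cannot yet answer affirmatively."""
    return [question for question, ok in zip(items, answers) if not ok]

# Example self-assessment: the first question is addressed, the second is not.
open_items = review(checklist, [True, False])
```

The value of the checklist format is exactly this: it turns diffuse ethical concerns into a finite list of questions a team can audit itself against and track over time.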

Who are the people that developed it?

Abhishek Gupta is the founder of the Montreal AI Ethics Institute and an AI ethics researcher working on creating ethical, safe, and inclusive AI systems. He also works as a Software Engineer doing machine learning at Microsoft in Montreal, where he sits on the AI Ethics Review Board for Commercial Software Engineering.

Carl Mörch is a French psychologist and lecturer at Université du Québec à Montréal (UQAM), Canada. He holds an M.Psy. from the Ecole de Psychologues Praticiens and specializes in the use of Information and Communication Technologies in psychology. He is currently finishing a Ph.D. at UQAM on the use of Artificial Intelligence and Big Data in suicide prevention. He also works with the Epione Lab at UQAM on the use of text messaging to improve universal prevention strategies.

Camille Vézy is a Ph.D. student in communication studies at the University of Montreal. She was involved in the co-construction process of the Montreal Declaration for Responsible AI, where she facilitated focus groups on the ethical impacts of AI in education, analyzed the collected data for the final report, and conducted research on digital literacy. She also chairs the board of directors of TechnoCultureClub, a non-profit organization that fosters active community participation in culture by supporting the development of new practices and uses of technology.

Brian Mishara is the founder of the Centre for Research and Intervention on Suicide, Ethical Issues and End-of-Life Practices (CRISE) and has been its director since 1996. He has been a Professor in the Psychology Department at the University of Quebec since 1979. An internationally renowned researcher in suicidology, he co-founded the Quebec Association for Suicide Prevention, served as President of the Canadian Association for Suicide Prevention (CASP), and was President of the International Association for Suicide Prevention (IASP) from 2005 to 2009. He authored a book on new technologies in suicide prevention in 2013.


Read through The Canada Protocol here.

  • © 2025 MONTREAL AI ETHICS INSTITUTE.
  • This work is licensed under a Creative Commons Attribution 4.0 International License.
  • Learn more about our open access policy here.