
The Canada Protocol: AI checklist for Mental Health & Suicide Prevention

May 1, 2019

Read the Canada Protocol: AI checklist for Mental Health & Suicide Prevention (download).


The Canada Protocol is an open-access project for Artificial Intelligence and Big Data developers, decision-makers, professionals, researchers, and anyone considering the use of AI. It was created by Carl Mörch and Abhishek Gupta in Montreal, Canada.

It started as part of Carl Mörch’s Ph.D. The Mental Health version was scientifically supervised by Brian L. Mishara, his Ph.D. supervisor.

AI is a source of immense hope and valid concern, and it can be challenging to know how to use it ethically. That’s why we started the Canada Protocol. We synthesized and analyzed over 40 reports, professional guidelines, and key studies on AI & Ethics, with the aim of gathering the existing scientific and validated recommendations on how to address AI’s ethical risks and challenges. We hope this project can help you!

The Mental Health and Suicide Prevention checklist focuses on the very specific challenges of using AI in the context of Mental Health and Suicide Prevention.

We created a checklist covering the key potential ethical questions. It was validated in 2018 by 16 experts and professionals through a two-round Delphi consultation.

Why Mental Health & AI?

Mental Health has been transformed by the rise of AI and Big Data (Luxton, 2014). Professionals, researchers, and companies increasingly use AI to detect at-risk individuals and depressed users, study emotions, increase motivation, improve public health strategies, and more. But as promising as it might be, AI raises many complex ethical challenges, such as the difficulty of obtaining consent or the risk of divulging private information.

The Development

We synthesized and analyzed over 40 reports, professional guidelines, and key studies on AI & Ethics, and collected over 300 mentions of challenges. We deduplicated the items and selected those most relevant to Mental Health & Suicide Prevention. We then invited international experts, AI developers, and researchers specializing in ethics, ICT, and health to provide feedback using the Delphi Method, a method commonly used in healthcare to gather expert opinions and reach a consensus on specific topics.
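For readers unfamiliar with the Delphi Method, a two-round consultation is often summarized as a per-item agreement score: items that reach a consensus threshold in round one are retained, and the rest go back to the panel. The sketch below is purely illustrative; the ratings, item names, and the 80% threshold are hypothetical assumptions, not the actual criteria used for the Canada Protocol.

```python
# Illustrative sketch of a two-round Delphi tally (hypothetical data and
# threshold; the Canada Protocol's actual consensus criteria are not shown here).

def consensus_reached(ratings, threshold=0.8):
    """Return True if at least `threshold` of experts rate an item as relevant."""
    agree = sum(1 for r in ratings if r == "relevant")
    return agree / len(ratings) >= threshold

# Hypothetical round-1 ratings from a panel of 16 experts.
round_1 = {
    "informed_consent": ["relevant"] * 14 + ["not relevant"] * 2,   # 87.5% agreement
    "data_retention":   ["relevant"] * 11 + ["not relevant"] * 5,   # 68.8% agreement
}

retained  = [item for item, votes in round_1.items() if consensus_reached(votes)]
revisited = [item for item, votes in round_1.items() if not consensus_reached(votes)]

print("Retained after round 1:", retained)    # ['informed_consent']
print("Sent back for round 2:", revisited)    # ['data_retention']
```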

Why a checklist?

Checklists are frequently used in health care. They serve a wide range of purposes: supporting clinicians in making diagnoses, verifying that a research methodology has been properly implemented, or improving public health strategies. They can be very useful for summarizing key recommendations and best practices.

How does it work?

This version of the Canada Protocol is a checklist. It invites you to review 38 key ethical questions that arise when AI is used in the context of Mental Health Care or Suicide Prevention. You are asked to read each item and, in doing so, review your practices and how your Autonomous Intelligent System (IEEE, 2016) works.
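As a rough sketch of that workflow (not the official tool, and with placeholder item wording rather than the actual 38 questions), each item can be treated as a question you answer against your own system, with unanswered or negative items flagged for follow-up:

```python
# Hypothetical sketch of walking through a checklist; the question texts
# below are placeholders, not the actual Canada Protocol items.

from dataclasses import dataclass

@dataclass
class ChecklistItem:
    question: str
    answer: str = "unanswered"  # later set to "yes", "no", or "n/a"

checklist = [
    ChecklistItem("Have users consented to the collection of their data?"),
    ChecklistItem("Is there a procedure for escalating detected high-risk cases?"),
    # ... the real protocol contains 38 items
]

def needs_attention(items):
    """Flag every item that is unanswered or answered 'no' for follow-up."""
    return [item.question for item in items if item.answer != "yes"]

checklist[0].answer = "yes"
print(needs_attention(checklist))  # items still requiring review
```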

Who are the people that developed it?

Abhishek Gupta is the founder of the Montreal AI Ethics Institute and an AI ethics researcher working on creating ethical, safe, and inclusive AI systems. He also works as a machine learning software engineer at Microsoft in Montreal, where he sits on the AI Ethics Review Board for Commercial Software Engineering.

Carl Mörch is a French psychologist and lecturer at Université du Québec à Montréal (UQAM), Canada. He holds an M.Psy. from the Ecole de Psychologues Praticiens and specializes in the use of Information and Communication Technologies in psychology. He is currently completing a Ph.D. at UQAM on the use of Artificial Intelligence and Big Data in suicide prevention. He also works with the Epione Lab at UQAM on the use of text messaging to improve universal prevention strategies.

Camille Vézy is a Ph.D. student in communication studies at the University of Montreal. She was involved in the co-construction process of the Montreal Declaration for Responsible AI, where she facilitated focus groups on the ethical impacts of AI in education, analyzed the collected data for the final report, and conducted research on digital literacy. She also chairs the board of directors of TechnoCultureClub, a non-profit organization that fosters active community participation in culture by supporting the development of new practices and uses of technology.

Brian Mishara is the founder of the Centre for Research and Intervention on Suicide, Ethical Issues and End-of-Life Practices (CRISE) and has been its director since 1996. He has been a Professor in the Psychology Department at the University of Quebec since 1979. An internationally renowned researcher in suicidology, he co-founded the Quebec Association for Suicide Prevention, served as President of the Canadian Association for Suicide Prevention (CASP), and was President of the International Association for Suicide Prevention (IASP) from 2005 to 2009. He authored a book on new technologies in suicide prevention in 2013.


Read through The Canada Protocol here.

