
Response to Office of the Privacy Commissioner of Canada Consultation Proposals pertaining to amendments to PIPEDA relative to Artificial Intelligence

March 22, 2020

Prepared by:

  • Mirka Snyder Caron, Sr. Associate, MAIEI
  • Abhishek Gupta, Founder, MAIEI and ML Engineer, Microsoft

To read the full 78-page consultation in PDF form, including a summary and the community insights, click here.

To read just the 17 pages of community insights, click here.


Below is the introductory summary:

In February 2020, the Montreal AI Ethics Institute (MAIEI) was invited by the Office of the Privacy Commissioner of Canada (OPCC) to provide comments, both at a closed roundtable and in writing, on the OPCC's consultation proposal for amendments relative to Artificial Intelligence (AI) to Canada's privacy legislation, the Personal Information Protection and Electronic Documents Act (PIPEDA).

The present document contains MAIEI's written comments and recommendations. In keeping with MAIEI's mission and mandate to act as a catalyst for public feedback on AI ethics and regulatory technology developments, and to host public competence-building workshops on critical topics in these domains, the reader will also find the feedback and propositions of Montrealers who participated in MAIEI's workshops, submitted as Schedule 1 to the present report. For each of the OPCC's 12 proposals, and the underlying questions as described on its website, MAIEI provides a short reply, a summary list of recommendations, and comments relevant to the question at hand.

We leave you with three general statements to keep in mind while going through the next pages:

1) AI systems should be used to augment human capacity for meaningful and purposeful connections and associations, not as a substitute for trust.

2) Humans have collectively agreed to uphold the rule of law, but for machines, the code is the rule. Where socio-technical systems are deployed to make important decisions or draw profiles and inferences about individuals, we will increasingly have to attempt the difficult exercise of drafting and encoding our law in a manner learnable by machines.

3) Let us work collectively towards a world where Responsible AI becomes the rule, before our socio-technical systems become “too connected to fail”.

Best,
The Montreal AI Ethics Institute


To read the full 78-page consultation in PDF form, click here.

