
Response to Office of the Privacy Commissioner of Canada Consultation Proposals pertaining to amendments to PIPEDA relative to Artificial Intelligence

March 22, 2020

Prepared by:

  • Mirka Snyder Caron, Sr. Associate, MAIEI
  • Abhishek Gupta, Founder, MAIEI and ML Engineer, Microsoft

To read the full 78-page consultation response in PDF form (including a summary and community insights), click here.

To read just the 17 pages of community insights, click here.


Below is the introductory summary:

In February 2020, the Montreal AI Ethics Institute (MAIEI) was invited by the Office of the Privacy Commissioner of Canada (OPCC) to provide comments, both at a closed roundtable and in writing, on the OPCC's consultation proposal for amendments relative to Artificial Intelligence (AI) to the Canadian privacy legislation, the Personal Information Protection and Electronic Documents Act (PIPEDA).

The present document contains MAIEI's written comments and recommendations. In keeping with MAIEI's mission and mandate to act as a catalyst for public feedback on AI ethics and regulatory technology developments, and to provide public competence-building workshops on critical topics in these domains, the reader will also find feedback and propositions from Montrealers who participated in MAIEI's workshops, submitted as Schedule 1 to the present report. For each of the OPCC's 12 proposals and their underlying questions, as described on its website, MAIEI provides a short reply, a summary list of recommendations, and comments relevant to the question at hand.

We leave you with three general statements to keep in mind while going through the next pages:

1) AI systems should be used to augment human capacity for meaningful and purposeful connections and associations, not as a substitute for trust.

2) Humans have collectively agreed to uphold the rule of law, but for machines, the code is the rule. Where socio-technical systems are deployed to make important decisions, profiles, or inferences about individuals, we will increasingly have to attempt the difficult exercise of drafting and encoding our law in a manner learnable by machines.

3) Let us work collectively towards a world where Responsible AI becomes the rule, before our socio-technical systems become “too connected to fail”.

Best,
The Montreal AI Ethics Institute


To read the full 78-page consultation response in PDF form, click here.


