
Response to the AHRC and WEF regarding Responsible Innovation in AI

April 16, 2019

Full AHRC report regarding responsible innovation in AI (download)


Below is the excerpted executive summary from our report.

Executive Summary:

The following paragraphs summarize prioritized comments from the Montreal AI Ethics Institute (“MAIEI”) pertaining to the Australian Human Rights Commission and World Economic Forum’s white paper on AI: Governance and Leadership.

If a central organization is to be established to promote responsible innovation in AI and related technologies (the “Responsible Innovation Organization” or “RIO”), it will be very important for public consultations to be an essential part of its policymaking. In our experience at MAIEI, such consultations are particularly effective at unearthing solutions that are interdisciplinary as well as contextually and culturally sensitive.

In the context of the RIO creating multi-stakeholder dialogue, the Montreal AI Ethics Institute strongly recommends that public consultation and engagement be a key component, because it helps surface interdisciplinary solutions, often leveraging first-hand, lived experiences that lead to more practical outcomes. Additionally, such a grassroots engagement process increases trust and acceptability on the part of the general public [14, 22], since the public would have played an integral part in shaping the technical and policy measures used to govern the systems that will affect them.

Apart from setting up the RIO, it will be essential to ensure it is able to collaborate with the existing organizations we have listed below, so as not to duplicate efforts or re-learn lessons those organizations have years of experience with. In fact, a system of distributed expertise would be valuable, whereby experts working at each of these organizations act as liaisons of the RIO and coordinate work between the RIO and the other organizations.

Furthermore, the scale of financial commitment should be high enough to allow meaningful work to happen: the hard, long-term, but ultimately impactful work of engaging the public on these issues and building public competence in developing responsible AI systems.

When developing approaches, solutions, and frameworks for public and private industries, care must be taken to ensure that the solutions are not generic but are tailored per industry, perhaps even split by sub-industry. Based on our experience, it is the Institute’s recommendation that the more nuanced and specific the advice, the more applicable, practical, and integrable it is, ultimately increasing the efficacy of the RIO’s work.

However, since AI may have an impact on all industries, we recommend that, at the time of evaluation and implementation, specific, concrete solutions tailored to an industry be combined with a holistic approach, since it is possible to gain consensus across industries on key ethical priorities and fundamental human values. The holistic approach, supported by increased collaboration and shared expertise between regulators, and informed by public and industry feedback, will guard against the risks of a siloed, industry-specific approach.

Finally, standardization without an appropriate understanding on the part of the layperson (an understanding that is commonly non-existent) is very difficult, if not impossible. In fact, it is potentially more harmful to have certifications in place that purport to guarantee adherence to a higher quality of product while preserving the rights of users, but that are in effect only hollow affirmations.

For example, the Statement of Applicability [23], which is usually only revealed under an NDA, shows the extent to which the standards were applied and to which parts of the system. In cybersecurity, for instance, a system can hold an ISO 27001 certification, yet that does not mean all of its components were covered in the evaluation that granted it; it is the SoA that tells you which parts of the system were actually evaluated.
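To make this gap concrete, here is a minimal, hypothetical sketch of the distinction between “the system is certified” and “this component was within the evaluated scope.” All component names and the SoA contents are invented for illustration and do not reflect any real certification:

```python
# Hypothetical sketch: an ISO 27001 certificate attests only to the scope
# declared in the Statement of Applicability (SoA), not to the whole system.
# Every component name and scope decision below is invented for illustration.

soa_scope = {
    "customer-facing web portal": True,    # evaluated for the certification
    "payments service": True,              # evaluated for the certification
    "internal analytics pipeline": False,  # declared out of scope in the SoA
    "third-party ML model host": False,    # declared out of scope in the SoA
}

def is_certified(system_components: list[str]) -> bool:
    """A certificate exists for the organization, so this naively says yes."""
    return True

def covered_by_soa(component: str) -> bool:
    """Whether a given component was actually within the evaluated scope."""
    return soa_scope.get(component, False)

system = list(soa_scope)
print(is_certified(system))  # True: the system "is certified"
print([c for c in system if not covered_by_soa(c)])
# ['internal analytics pipeline', 'third-party ML model host']
# Certified overall, yet these components were never evaluated.
```

The difference between the two checks is precisely what the SoA reveals, and why a certification badge alone is a weak signal to the layperson.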

