Below is the executive summary excerpted from our report.
Executive Summary:
The following paragraphs summarize the Montreal AI Ethics Institute's (“MAIEI”) prioritized comments pertaining to the Australian Human Rights Commission and World Economic Forum's white paper on AI: Governance and Leadership.
If a central organization is to be established to promote responsible innovation in AI and related technologies (the “Responsible Innovation Organization” or “RIO”), it will be very important for this organization to make public consultations an essential part of its policy making. In our experience at MAIEI, such consultations are particularly effective in unearthing solutions that are interdisciplinary as well as contextually and culturally sensitive.
In the context of the RIO creating multi-stakeholder dialogue, it is the strong recommendation of the Montreal AI Ethics Institute that public consultation and engagement be a key component, because it helps to surface interdisciplinary solutions, often drawing on first-hand, lived experience that leads to more practical outcomes. Additionally, such an engagement process at the grassroots level increases the degree of trust and acceptance on the part of the general public14,22, since they would have played an integral part in shaping the technical and policy measures used to govern the systems that will affect them.
Beyond setting up the RIO, it will be essential to ensure that it is able to collaborate with the existing organizations we have listed below, so as not to duplicate efforts or re-learn things in which those organizations already have years of experience. In fact, it would be valuable to have a system whereby a distributed network of “experts” (akin to RIO liaisons) works within each of these organizations and coordinates the work between the RIO and all the others.
Furthermore, the scale of financial commitment must be high enough to allow meaningful work to happen: the hard, long-term, but ultimately impactful work of engaging the public on these issues and building public competence in developing responsible AI systems.
When thinking about approaches, solutions, and frameworks for public and private industries, care must be taken to ensure that the solutions are not generic but are tailored to each industry, perhaps even to sub-industries. Based on its experience, it is the Institute's recommendation that the more nuanced and specific the advice, the more applicable, practical, and integrable it is, ultimately increasing the efficacy of the RIO's work.
However, considering that AI may have an impact on all industries, it is our recommendation that, at the time of evaluation and implementation, specific, concrete solutions tailored to an industry be combined with a holistic approach, since it is possible to gain consensus across multiple industries on key ethical priorities and fundamental human values. The holistic approach, supported by increased collaboration and shared expertise between regulators while taking public and industry feedback into account, will mitigate the risk of applying a siloed, industry-specific approach.
Finally, standardization is very difficult, if not impossible, without an appropriate understanding on the part of the layperson, which is commonly non-existent. In fact, it is potentially more harmful to have certifications in place that purport to guarantee adherence to a higher quality of product while preserving the rights of users, but are in effect only hollow affirmations.
For example, the Statement of Applicability (“SoA”)23, which is usually only disclosed under an NDA, shows the extent to which the standards were applied and to which parts of the system. In cybersecurity, for instance, a system can hold an ISO 27001 certification even though not all of its components were covered by the evaluation used to obtain that certification. It is the SoA that indicates which parts of the system were actually evaluated in granting the certification.