🔬 Research summary by Angshuman Kaushik, Researcher in AI Policy, Governance and Ethics.
[Original document by UNESCO]
Overview: The Director-General of the United Nations Educational, Scientific and Cultural Organization (UNESCO) convened an Ad Hoc Expert Group (AHEG) to prepare a draft text of a Recommendation on the Ethics of Artificial Intelligence (“hereinafter the Recommendation”) and submitted the draft text to the special committee meeting of technical and legal experts designated by Member States. The special committee meeting revised the draft Recommendation and approved the present text for submission to the General Conference at its 41st Session for adoption. The Recommendation was subsequently adopted unanimously by all 193 UNESCO Member States on 24 November 2021.
Introduction
The Recommendation addresses ethical issues related to AI to the extent that they fall within UNESCO’s mandate. A significant feature of the Recommendation is that it does not provide a single definition of AI, since such a definition would need to change over time in accordance with technological developments. Rather, its ambition is to address those features of AI systems that are of central ethical relevance. The Recommendation therefore approaches AI systems as systems which have the capacity to process data and information in a way that resembles intelligent behaviour, and which typically include aspects of reasoning, learning, perception, prediction, planning or control. The aim of the Recommendation is to provide a basis for making AI systems work for the good of humanity and for preventing harm. It also aims at ‘stimulating the peaceful use of AI systems’. This phrase presumably refers to the use of AI systems in warfare, but whatever it means, it requires elucidation.
Core aspects of the Recommendation
The Recommendation states that the policy actions proposed in it are all directed at promoting trustworthiness at all stages of the AI system life cycle. Its values and principles are outlined below:
Values:
- Human rights and fundamental freedoms must be respected, protected and promoted throughout the life cycle of AI systems;
- All actors involved in the life cycle of AI systems must comply with laws, standards, practices etc., designed for environmental and ecosystem protection and restoration, and sustainable development;
- Respect, protection and promotion of diversity and inclusiveness should be ensured throughout the life cycle of AI systems, consistent with international law, including human rights law; and
- AI actors should play a participative and enabling role to ensure peaceful and just societies.
Principles:
- The use of AI systems shall be governed by the principle of ‘necessity and proportionality’. AI systems, in particular, should not be used for social scoring or mass surveillance purposes;
- Safe and secure AI systems shall be prioritized and any threat emanating from such systems shall be addressed to ensure human and environmental well-being;
- AI actors shall safeguard fairness and non-discrimination and also ensure that the benefits of AI technologies are available to all;
- The continuous assessment of the human, social, cultural, economic and environmental impact of AI technologies should be carried out to ascertain whether they are in conformity with sustainability goals, such as those currently identified in the United Nations Sustainable Development Goals (UNSDGs);
- Privacy shall be protected throughout the life cycle of the AI systems;
- Member States should ensure that it is always possible to attribute ethical and legal liability arising out of AI systems to humans. Further, as a rule, life and death decisions should not be ceded to AI systems;
- Efforts need to be made to enhance transparency and explainability of AI systems, including those having extra-territorial effect, to support democratic governance;
- Appropriate oversight, impact assessment, audit and due diligence mechanisms, including whistle-blowers’ protection, should be developed to ensure accountability for AI systems;
- Public awareness and understanding of AI technologies should be promoted through open and accessible education, civic engagement, AI ethics training etc., so that people can take informed decisions regarding their use of AI systems and be protected from undue influence; and
- States shall be able to regulate the data generated within or passing through their territories, and take measures towards effective regulation of data in accordance with international law. Further, measures should be taken to allow for meaningful participation by marginalized groups.
Areas of policy action
The policy actions set out in the policy areas operationalize the values and principles of the Recommendation. It calls on Member States to put in place effective measures, such as policy frameworks, and to ensure that stakeholders such as private sector companies, academic and research institutions and civil society adhere to them, by encouraging them to develop ethical impact assessments, due diligence tools etc., in line with guidance including the United Nations Guiding Principles on Business and Human Rights. The policy areas are listed below:
- Policy Area 1: The Member States shall introduce frameworks for impact assessments, such as ethical impact assessments, to identify and assess the benefits, concerns and risks of AI systems.
- Policy Area 2: The Member States shall ensure that AI governance mechanisms are inclusive, transparent, multidisciplinary, multilateral and multi-stakeholder.
- Policy Area 3: The Member States shall develop data governance strategies. Further, privacy shall be respected, protected and promoted throughout the life cycle of AI systems.
- Policy Area 4: Both the Member States and transnational corporations shall prioritize AI ethics by including discussions on the topic in relevant international, intergovernmental and multi-stakeholder forums. Further, the Member States shall work to promote international collaboration on AI research and innovation, particularly in the area of AI ethics.
- Policy Area 5: The Member States and businesses shall assess the direct and indirect environmental impact throughout the life cycle of an AI system. They shall also ensure compliance with environmental law, policies and practices by all AI actors.
- Policy Area 6: The Member States shall ensure that the potential of AI systems to contribute to achieving gender equality is fully maximized, and further, that the human rights and fundamental freedoms of girls and women, and their safety and integrity, are not violated at any stage of the AI system life cycle.
- Policy Area 7: The Member States are encouraged to incorporate AI systems, where appropriate, in the preservation, enrichment, understanding, promotion, management and accessibility of cultural heritage, including endangered languages as well as indigenous languages and knowledge.
- Policy Area 8: The Member States shall work with international organizations, educational institutions and private and non-governmental entities to provide adequate AI literacy education to the public, in order to empower people and reduce the digital divide and digital access inequalities resulting from the wide adoption of AI systems.
- Policy Area 9: The Member States shall use AI systems to improve access to information and knowledge. This includes support to researchers, academia, journalists, the general public and developers to enhance freedom of expression etc.
- Policy Area 10: The Member States shall assess and address the impact of AI systems on labor markets.
- Policy Area 11: The Member States shall endeavor to employ effective AI systems for improving human health and protecting the right to life, including mitigating disease outbreaks. Further, they shall implement policies to raise awareness about the anthropomorphization of AI technologies.
The Recommendation also directs the Member States to credibly and transparently monitor and evaluate policies, programmes and mechanisms related to the ethics of AI, using a combination of quantitative and qualitative approaches, according to their specific conditions, governing structures and constitutional provisions. It further directs that processes for monitoring and evaluation should ensure broad participation of all stakeholders, including, but not limited to, vulnerable people or people in vulnerable situations.
Between the lines
Although the Recommendation is voluntary and non-binding, it signifies ‘consensus ad idem’ amongst all the UNESCO Member States. However, several of its suggestions require elaboration. For instance, it recommends that “Member States and business enterprises should implement appropriate measures to monitor all phases of an AI system life cycle”. Guidance on the meaning and mode of operation of the term ‘appropriate measures’ is therefore imperative. Another case in point is the statement that “Member States that acquire AI systems for human rights-sensitive use cases, such as …the independent judiciary system should provide mechanisms to monitor the social and economic impact of such systems by appropriate oversight authorities, including independent data protection authorities, sectoral oversight and public bodies responsible for oversight”. Two questions emerge here: first, can an independent judicial system be subjected to oversight by, say, a data protection authority? Second, would subjecting a court to further oversight, when it is already under the supervisory control of a superior court, not lead to overlapping jurisdiction? Other key areas also need clarification. It is also interesting to note that China, as a UNESCO Member State, has adopted the Recommendation, which follows its own formulation of the AI Ethics Code. The US, by contrast, is not a signatory to the Recommendation, as it is not a UNESCO Member State. What remains to be seen is how the Member States incorporate and operationalize the various guidelines enshrined in the Recommendation in the future.