🔬 Research Summary by Avantika Bhandari, SJD. Her research areas cover indigenous knowledge and its protection, human rights, and intellectual property rights.
[Original paper by Anna Jobin, Marcello Ienca, Effy Vayena]
Overview: Many private companies, research institutions, and public-sector bodies have formulated guidelines for ethical AI. But what constitutes "ethical AI," and which ethical requirements, standards, and best practices are needed for its realization? This paper investigates whether a global agreement is emerging on these questions, and analyzes the current corpus of principles and guidelines on ethical AI.
There has been continuous and vigorous debate around AI technologies and their transformative impact on societies. While most studies establish that AI brings many advantages, they also underline numerous ethical, legal, and economic concerns primarily relating to human rights and freedoms. Then there are concerns that AI may “jeopardize jobs for human workers, be exploited by malicious actors, or inadvertently disseminate bias and thereby undermine fairness.”
National and international organizations have sought to address the risks associated with the development of AI by establishing ad hoc expert committees. Examples include the High-Level Expert Group on Artificial Intelligence appointed by the European Commission, the Advisory Council on the Ethical Use of Artificial Intelligence and Data in Singapore, and the Select Committee on Artificial Intelligence of the United Kingdom (UK) House of Lords. Private companies like Google and SAP have also released their own principles and guidelines on AI. Professional associations and non-governmental organizations such as the Association for Computing Machinery (ACM), Access Now, and Amnesty International have come forward with their own recommendations. The active involvement of these different stakeholders in issuing AI policies and guidelines demonstrates a strong interest in shaping the ethics of AI to meet their respective priorities.
The researchers pose the following questions:
- Are these groups converging on what ethical AI should be, and on the ethical principles that will determine the development of AI?
- And, if they diverge, then what are these differences, and can they be reconciled?
The researchers conducted a review of the existing corpus of guidelines on ethical AI. The search identified 84 documents containing ethical principles or guidelines for AI.
- Data reveal a significant increase in the number of publications, with 88% having been released after 2016.
- Most documents were produced by private companies (22.6%) and governmental agencies (21.4%), followed by academic and research institutions (10.7%), inter-governmental or supra-national organizations (9.5%), non-profit organizations and professional associations/scientific societies (8.3% each), private sector alliances (4.8%), research alliances (1.2%), science foundations (1.2%), federations of worker unions (1.2%), and political parties (1.2%). Four documents were issued by initiatives belonging to more than one of the above categories, and four more could not be classified at all (4.8% each).
- In terms of geographic distribution: a significant representation came from more economically developed countries (MEDC). The USA (23.8%) and the UK (16.7%) together account for more than a third of all ethical AI principles, followed by Japan (4.8%), Germany, France, and Finland (3.6% each).
- Ethical values and principles: Eleven (11) overarching ethical values and principles emerged from the content analysis. Listed by the frequency of sources in which they appeared, these are: transparency; justice and fairness; non-maleficence; responsibility; privacy; beneficence; freedom and autonomy; trust; dignity; sustainability; and solidarity.
- No single ethical principle was common to the entire corpus of documents; however, an emerging convergence was found around the following principles: transparency, justice and fairness, non-maleficence, responsibility, and privacy.
- The proportion of documents issued by the public and private sectors indicates that the ethical challenges of AI concern both sets of stakeholders. However, there is a notable divergence in the solutions proposed.
- Further, geographic areas such as South and Central America, Africa, and Asia are underrepresented, which suggests that the international debate on AI may not be happening in equal measure. MEDCs appear to be shaping this debate, which raises concerns about "neglecting local knowledge, cultural pluralism and global fairness."
- There is an emerging cross-stakeholder convergence on promoting the ethical principles of transparency, justice, non-maleficence, responsibility, and privacy. However, the thematic analysis shows divergences in four (4) areas: 1) how ethical principles are interpreted, 2) why they are deemed important, 3) what issue, domain, or actors they pertain to, and 4) how they should be implemented. It remains ambiguous which ethical principles should be prioritized, how conflicts between principles should be resolved, what enforcement mechanisms should apply to AI, and how institutions and researchers can comply with the resulting guidelines.
The research indicates an emerging consensus around the promotion of some ethical principles; however, thematic analysis provides a more complicated narrative, as "there are critical differences in how these principles are interpreted as well as what requirements are considered to be necessary for their realization."
Between the lines
The different stakeholders seem to converge on the importance of transparency, responsibility, non-maleficence, and privacy for the development and deployment of ethical AI. However, the researchers also call for greater attention to underrepresented ethical principles such as solidarity, human dignity, and sustainability, which would most likely result in a better articulation of the ethical landscape for AI. Moreover, it is high time the focus shifted from principle formulation to actual practice. Finally, a global scheme for ethical AI should "balance the need for cross-national and cross-domain harmonization over the respect for cultural diversity and moral pluralism."
NOTE: The researchers acknowledge limitations in the study. First, guidelines and soft-law documents are gray literature and are therefore not indexed in conventional databases. Second, a language bias may have skewed the corpus towards English-language results. Finally, given the rapid pace of publication, new policies may have been published after the research was completed.