Artificial Intelligence: the global landscape of ethics guidelines

October 8, 2021

🔬 Research Summary by Avantika Bhandari, SJD. Her research areas cover indigenous knowledge and its protection, human rights, and intellectual property rights.

[Original paper by Anna Jobin, Marcello Ienca, Effy Vayena]


Overview: Many private companies, research institutions, and public-sector organizations have formulated guidelines for ethical AI. But what constitutes “ethical AI,” and which ethical requirements, standards, and best practices are required for its realization? This paper investigates whether a global agreement is emerging on these questions and analyzes the current corpus of principles and guidelines on ethical AI.


Introduction

There has been continuous and vigorous debate around AI technologies and their transformative impact on societies. While most studies establish that AI brings many advantages, they also underline numerous ethical, legal, and economic concerns, primarily relating to human rights and freedoms. There are also concerns that AI may “jeopardize jobs for human workers, be exploited by malicious actors, or inadvertently disseminate bias and thereby undermine fairness.”

National and international organizations are responding to the risks associated with the development of AI by convening ad hoc expert committees. Examples include the High-Level Expert Group on Artificial Intelligence appointed by the European Commission, the Advisory Council on the Ethical Use of Artificial Intelligence and Data in Singapore, and the Select Committee on Artificial Intelligence of the United Kingdom (UK) House of Lords. Private companies such as Google and SAP have also released their own principles and guidelines on AI. Professional associations and non-governmental organizations such as the Association for Computing Machinery (ACM), Access Now, and Amnesty International have come forward with their own recommendations. The active involvement of such different stakeholders in issuing AI policies and guidelines demonstrates their strong interest in shaping the ethics of AI to match their respective priorities.

The researchers pose the following questions:

  • Are these groups converging on what ethical AI should be and on the ethical principles that will determine the development of AI?
  • And if they diverge, what are these differences, and can they be reconciled?

Key Insights

Results

The researchers conducted a review of the existing corpus of guidelines on ethical AI. The search identified 84 documents containing ethical principles or guidelines for AI.

  • Data reveal a significant increase in the number of publications, with 88% having been released after 2016.
  • Most documents were produced by private companies (22.6%) and governmental agencies (21.4%), followed by academic and research institutions (10.7%), inter-governmental or supra-national organizations (9.5%), non-profit organizations and professional associations/scientific societies (8.3% each), private sector alliances (4.8%), research alliances (1.2%), science foundations (1.2%), federations of worker unions (1.2%), and political parties (1.2%). Four documents were issued by initiatives belonging to more than one of the above categories, and four more could not be classified at all (4.8% each).
  • In terms of geographic distribution, more economically developed countries (MEDCs) were strongly represented. The USA (23.8%) and the UK (16.7%) together account for more than a third of all ethical AI principles, followed by Japan (4.8%) and Germany, France, and Finland (3.6% each).
  • Ethical values and principles: Eleven (11) overarching ethical values and principles emerged from the content analysis. Listed in order of the number of sources in which they appeared, these are: transparency, justice and fairness, non-maleficence, responsibility, privacy, beneficence, freedom and autonomy, trust, dignity, sustainability, and solidarity (an illustrative sketch of this kind of frequency tally follows the list).
  • The researchers found no single ethical principle common to the entire corpus of documents; however, an emerging convergence was found around the following principles: transparency, justice and fairness, non-maleficence, responsibility, and privacy.
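To make the frequency ranking concrete, here is a minimal, hypothetical sketch of how principles coded across a corpus of guideline documents could be tallied by the number of documents that mention them. This is not the authors’ actual coding pipeline; the document identifiers and principle sets are invented purely for illustration.

```python
from collections import Counter

# Hypothetical illustration only -- not the authors' actual methodology.
# Each guideline document is represented by the set of ethical principles
# it was coded as containing; principles are then ranked by how many
# documents mention them.
corpus = {
    "doc_01": {"transparency", "privacy", "justice and fairness"},
    "doc_02": {"transparency", "non-maleficence", "responsibility"},
    "doc_03": {"privacy", "responsibility", "sustainability"},
    # ... the study coded 84 documents in total
}

# Count, for each principle, the number of documents in which it appears.
mentions = Counter(p for principles in corpus.values() for p in principles)

total_docs = len(corpus)
for principle, n_docs in mentions.most_common():
    print(f"{principle}: {n_docs}/{total_docs} documents ({100 * n_docs / total_docs:.0f}%)")
```

Counting documents rather than total mentions mirrors how the summary above reports convergence: by the number of sources in which a principle appears.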

Discussion

  • The proportion of documents issued by the public and private sectors indicates that the ethical challenges of AI concern both sets of stakeholders. However, there is a notable divergence in the solutions they propose.
  • Further, there seems to be an underrepresentation of geographic areas such as South and Central America, Africa, and Asia, which suggests that the international debate on AI may not be happening in equal measure across regions. MEDCs appear to be shaping this debate, which raises concerns about “neglecting local knowledge, cultural pluralism and global fairness.”
  • There is an emerging cross-stakeholder convergence on promoting the ethical principles of transparency, justice, non-maleficence, responsibility, and privacy. However, the thematic analysis shows divergences in four (4) areas: 1) how ethical principles are interpreted, 2) why they are deemed important, 3) what issue, domain, or actors they pertain to, and 4) how they should be implemented. It remains ambiguous which ethical principles should be prioritized, how conflicts between principles should be resolved, what enforcement mechanisms should apply to AI, and how institutions and researchers can comply with the resulting guidelines.

The research indicates an emerging consensus around the promotion of some ethical principles; however, the thematic analysis offers a more complicated picture, as “there are critical differences in how these principles are interpreted as well as what requirements are considered to be necessary for their realization.”

Between the lines

The different stakeholders seem to converge on the importance of transparency, responsibility, non-maleficence, and privacy for the development and deployment of ethical AI. However, the researchers also call for greater attention to underrepresented ethical principles such as solidarity, human dignity, and sustainability, which would most likely result in a better articulation of the ethical landscape for AI. Moreover, it is high time the focus shifted from principle formulation to actual practice. Finally, a global scheme for ethical AI should “balance the need for cross-national and cross-domain harmonization over the respect for cultural diversity and moral pluralism.”

NOTE: The researchers acknowledge limitations in the study. First, guidelines and soft-law documents are a form of gray literature and are therefore not indexed in conventional databases. Second, a language bias may have skewed the corpus towards English-language results. Finally, given the rapid pace of publication, it is possible that new policies were published after the research was completed.

