Montreal AI Ethics Institute

Explaining the Principles to Practices Gap in AI

June 14, 2021

🔬 Research summary by Abhishek Gupta (@atg_abhishek), our Founder, Director, and Principal Researcher.

[Original paper by Daniel Schiff, Bogdana Rakova, Aladdin Ayesh, Anat Fanti, Michael Lennon]


Overview: Many principles now guide the development of AI toward ethical, safe, and inclusive outcomes, yet a significant gap remains between these principles and their implementation in practice. This paper outlines some potential causes of this gap in corporations: misaligned incentives, the complexity of AI’s impacts, disciplinary divides, the distribution of responsibilities across organizations, the governance of knowledge, and the difficulty of identifying best practices. It concludes with a set of recommendations on how to address these challenges.


Introduction

Have you found yourself inundated with ethical guidelines published at a rapid clip? It is not uncommon to feel overwhelmed by the many, often conflicting, sets of guidelines in AI ethics; the OECD AI repository alone contains more than 100 documents! Yet even after several rounds of discussion, the actual implementation of these guidelines often leaves much to be desired. The authors structure these gaps into common themes and emphasize the use of impact assessments and structured interventions through a framework that is broad, operationalizable, flexible, iterative, guided, and participatory.

What are the gaps?

The paper starts by highlighting some initiatives from corporations outlining their AI ethics commitments. It finds that these commitments are often vague and high-level; in particular, without practical guidance for implementation and empirical evidence of their effectiveness, claims of being ethical are no more than promises without action.

Starting with the incentives gap, the authors highlight how an organization should be viewed not as a monolith but as a collection of entities whose incentives may or may not be aligned with the responsible use of AI. They also warn that companies might engage with AI ethics merely to improve their standing with customers and build trust, tactics known as ethics shopping, ethics washing, and ethics shirking. Such an approach minimizes accountability while maximizing virtue signaling. Aligning the organization’s purpose, mission, and vision with the responsible use of AI, treating them as “value levers,” can help alleviate this challenge.

AI’s impacts are notoriously hard to delineate and assess, especially when they have second- or third-order effects. We need to approach this from an intersectional perspective to better understand how these systems depend on and interact with the environments surrounding them. This matters because the harms from AI systems do not arise in a straightforward way from a single product.

Thinking about these intersectional concerns requires working with stakeholders across disciplines, but their different technical and ethical training backgrounds make convergence and shared understanding difficult. Discussions also sometimes fixate on futuristic scenarios that may or may not come to pass, and unrealistic generalizations make the conversation untenable and impractical. Within an organization, when such discussions do take place, there is a risk that the ethicists and other stakeholders participating lack the decision-making power to effect change. Responsibility is often diffused both laterally and vertically in an organization, which can make concrete action hard.

Finally, there is now a proliferation of technical tools to address bias, privacy, and other ethics issues. Yet many of them come without specific, actionable guidance on how to put them into practice. They also often lack guidance on how to customize and troubleshoot them for different scenarios, further limiting their applicability.
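To make the point concrete, here is a minimal, self-contained Python sketch of the kind of tool the paper gestures at: it computes a single group-fairness number but offers none of the contextual guidance the authors find missing. The data, group labels, and function below are hypothetical illustrations, not artifacts from the paper.

```python
# Sketch of a typical technical fairness tool: it reports a metric, but not
# what counts as acceptable, which mitigation to apply, or how to adapt it
# to a given deployment. All data and names here are hypothetical.
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rates across groups."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Hypothetical model outputs for applicants from groups "A" and "B".
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(f"Demographic parity gap: {demographic_parity_gap(preds, groups):.2f}")
# Prints 0.50 -- a number, with no guidance on whether 0.50 is tolerable in
# context or what to do next: exactly the gap the paper describes.
```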

What an impact assessment framework can do

The authors propose an impact assessment framework characterized by six properties: broad, operationalizable, flexible, iterative, guided, and participatory, with brief explanations of each tenet. The framing also includes the notion of measuring impacts rather than merely speculating about them. In particular, in contrast with other impact assessment frameworks, they emphasize the need to shy away from anticipating only those impacts assumed in advance to be important, and to be more deliberate in one’s choices. As a way of normalizing this practice, they advocate including these ideas in curricula, alongside the heavy emphasis that current courses place on privacy and bias and their technical solutions. The paper concludes with an example of applying the framework to forestation, highlighting how carbon sequestration efforts should also consider socio-ecological needs, for example those of Indigenous communities.
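One way to picture how these six tenets could move from slogan to artifact is to encode them as a reviewable record inside an organization. The field names and the forestation-flavoured example below are illustrative assumptions, not the authors’ instrument.

```python
# Hypothetical sketch of an impact-assessment record organized around the
# framework's six tenets. Fields and values are illustrative only.
from dataclasses import dataclass

@dataclass
class ImpactAssessment:
    system: str
    anticipated_impacts: list[str]         # broad: social, environmental, economic
    measured_indicators: dict[str, float]  # operationalizable: measured, not speculated
    context_notes: str                     # flexible: adapted to the deployment setting
    review_cycle_months: int               # iterative: revisited, not one-off
    guidance_used: list[str]               # guided: which standards or templates informed it
    stakeholders_consulted: list[str]      # participatory: who was in the room

# Illustrative entry inspired by the paper's forestation example.
assessment = ImpactAssessment(
    system="carbon-sequestration planning model",
    anticipated_impacts=["carbon capture", "socio-ecological effects on Indigenous communities"],
    measured_indicators={"projected_tons_co2": 12000.0},
    context_notes="Deployment region overlaps with traditional land use.",
    review_cycle_months=6,
    guidance_used=["internal AI ethics checklist"],
    stakeholders_consulted=["local Indigenous representatives", "ecologists"],
)
print(assessment.system, "-", len(assessment.stakeholders_consulted), "stakeholder groups consulted")
```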

Between the lines

It’s great to see frameworks that are centred on practical interventions rather than abstract ideas. The gap between principles and practices today is stark, and such an ontology helps organizations better understand where they can make improvements. We need more work like this, and the natural next step for such a research endeavour is to apply the ideas presented in this paper in practice and see if they hold up to empirical scrutiny.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.

