
Explaining the Principles to Practices Gap in AI

June 14, 2021

🔬 Research summary by Abhishek Gupta (@atg_abhishek), our Founder, Director, and Principal Researcher.

[Original paper by Daniel Schiff, Bogdana Rakova, Aladdin Ayesh, Anat Fanti, Michael Lennon]


Overview: Although many principles now guide the development of AI toward ethical, safe, and inclusive outcomes, a significant gap remains between those principles and their implementation in practice. This paper outlines potential causes of this gap within corporations: misaligned incentives, the complexity of AI’s impacts, disciplinary divides, the organizational distribution of responsibilities, the governance of knowledge, and the difficulty of identifying best practices. It concludes with a set of recommendations on how we can address these challenges.


Introduction

Have you found yourself inundated by ethical guidelines published at a rapid clip? It is not uncommon to feel overwhelmed by the many, often conflicting, sets of guidelines in AI ethics; the OECD AI repository alone contains more than 100 documents! Yet even after several rounds of discussion, the actual implementation of these guidelines often leaves much to be desired. The authors attempt to structure these gaps into common themes, and they emphasize the use of impact assessments and structured interventions through a framework that is broad, operationalizable, flexible, iterative, guided, and participatory.

What are the gaps?

The paper starts by highlighting initiatives from corporations outlining their AI ethics commitments. The authors find that these are often vague and high-level: without practical guidance for implementation and empirical evidence of their effectiveness, claims of being ethical are no more than promises without action.

Starting with the incentives gap, the authors highlight how an organization should be viewed not as a monolith but as a collection of entities whose incentives may or may not be aligned with the responsible use of AI. They also warn that companies might engage with AI ethics merely to improve their standing with customers and to build trust, tactics known as ethics shopping, ethics washing, and ethics shirking. Such an approach minimizes accountability while maximizing virtue signaling. Aligning the organization’s purpose, mission, and vision with the responsible use of AI, treating them as “value levers,” can help alleviate this challenge.

AI’s impacts are notoriously hard to delineate and assess, especially when they have second- or third-order effects. We need to approach them from an intersectional perspective to better understand the interdependence of these systems with the environments surrounding them. This is important because the harms from AI systems do not arise in a straightforward way from a single product.

Thinking about these intersectional concerns requires working with stakeholders across disciplines, but those stakeholders come from different technical and ethical training backgrounds, which makes convergence and shared understanding difficult. Discussions also sometimes focus on futuristic scenarios that may or may not come to pass, and unrealistic generalizations make the conversation untenable and impractical. When such discussions take place within an organization, there is a risk that the ethicists and other stakeholders participating lack the decision-making power to effect change. Responsibility is often diffused laterally and vertically across an organization, which can make concrete action hard.

Finally, there is now a proliferation of technical tools to address bias, privacy, and other ethics issues. Yet many of them come without specific, actionable guidance on how to put them into practice. They sometimes also lack guidance on how to customize and troubleshoot them for different scenarios, further limiting their applicability.
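
To make this concrete, here is a minimal sketch of the kind of off-the-shelf fairness check the authors have in mind, using the open-source fairlearn library (my choice of illustration; the paper does not name specific tools). The hiring data and groups are hypothetical.

```python
import numpy as np
from fairlearn.metrics import demographic_parity_difference

# Hypothetical predictions from a hiring model, with a binary
# sensitive attribute splitting candidates into groups "A" and "B".
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
sensitive = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

# Difference in selection rates between the two groups; 0.0 means parity.
dpd = demographic_parity_difference(y_true, y_pred, sensitive_features=sensitive)
print(f"Demographic parity difference: {dpd:.2f}")  # 0.50 for this toy data
```

The tool returns a number in one line of code, but it says nothing about what threshold is acceptable in a given hiring context or how to remediate a violation, which is precisely the practice gap described above.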

What an impact assessment framework can do

The authors propose an impact assessment framework characterized by six properties: broad, operationalizable, flexible, iterative, guided, and participatory, each of which they briefly explain. The framing also includes the notion of measuring impacts rather than just speculating about them. In particular, in contrast with other impact assessment frameworks, they emphasize the need to shy away from anticipating only those impacts assumed to be important and to be more deliberate in one’s choices. To normalize this practice, they advocate for including these ideas in curricula, in addition to the heavy emphasis that current courses place on privacy and bias and their technical solutions. The paper concludes with an example applying this framework to forestation, highlighting how carbon sequestration efforts should also consider socio-ecological needs, for example, those of indigenous communities.
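
As an illustration only (this encoding is mine, not the authors’), the six tenets could be tracked as a lightweight checklist attached to each project, making gaps in an assessment explicit:

```python
from dataclasses import dataclass, field

@dataclass
class ImpactAssessment:
    project: str
    # The paper's six tenets; each flag records whether the
    # assessment satisfies that property for this project.
    broad: bool = False              # covers social, environmental, economic impacts
    operationalizable: bool = False  # translates into concrete team actions
    flexible: bool = False           # adaptable across domains and contexts
    iterative: bool = False         # revisited as the system and its impacts evolve
    guided: bool = False             # supported by structured prompts, not ad hoc
    participatory: bool = False      # includes affected stakeholders, not just builders
    notes: dict = field(default_factory=dict)

    def gaps(self) -> list[str]:
        """Return the tenets this assessment has not yet satisfied."""
        tenets = ["broad", "operationalizable", "flexible",
                  "iterative", "guided", "participatory"]
        return [t for t in tenets if not getattr(self, t)]

# Hypothetical project echoing the paper's forestation example.
assessment = ImpactAssessment(project="forestation-carbon-model",
                              broad=True, iterative=True)
print(assessment.gaps())  # ['operationalizable', 'flexible', 'guided', 'participatory']
```

Recording the tenets as explicit fields makes an incomplete assessment visible at review time, rather than leaving the framework’s properties as aspirational prose.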

Between the lines

It’s great to see frameworks that are centred on practical interventions more than abstract ideas. The gap between principles and practices today is stark, and such an ontology helps an organization better understand where it can make improvements. We need more such work, and the next iteration of this research endeavour is to apply the ideas presented in the paper in practice and see if they hold up to empirical scrutiny.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.
