Montreal AI Ethics Institute

Democratizing AI ethics literacy


Putting AI ethics to work: are the tools fit for purpose?

November 16, 2021

🔬 Research summary by Ravit Dotan, who mainly works in epistemology, philosophy of science, and philosophy of machine learning. Her secondary areas of interest are feminist philosophy and philosophy of religion.

[Original paper by Jacqui Ayling and Adriane Chapman]


Overview: This paper maps the landscape of AI ethics tools: it develops a typology for classifying AI ethics tools and applies it to the tools that exist. In addition, the paper identifies two gaps. First, key stakeholders, including members of marginalized communities, are under-represented in the use of AI ethics tools and of their outputs. Second, there is a lack of tools for external auditing in AI ethics, which is a barrier to the accountability and trustworthiness of organizations that develop AI systems.


Introduction

As more and more AI ethics tools are developed, it becomes difficult to get a handle on the terrain. This paper addresses the challenge by mapping and analyzing the existing AI ethics tools (as of the end of 2020).

The authors conducted a thorough search and identified 169 AI ethics documents. Of those, 39 were found to include concrete AI ethics tools. Each of the 39 tools was classified using the following questions: (i) What sector are the document’s authors from? And what sector are the users of the tools from? (ii) Which stakeholders would either use the tool or engage with the results? (iii) What type of tool is it? Which strategy does it employ? (iv) Were these tools for use internally, or did they have external elements? (v) In which stage in the AI production and use chain was the tool used? (vi) Was the tool appropriate for addressing the model, data, or both?
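
The six classification questions above can be thought of as dimensions of a typology. As a hypothetical illustration (the field names and enum labels below are ours, not the authors' exact vocabulary), the scheme could be encoded as a simple data structure:

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative labels for dimensions (i) and (iii); the paper's own
# categories may be named or grouped differently.
class Sector(Enum):
    PRIVATE = "private"
    PUBLIC = "public"
    ACADEMIC = "academic"
    CIVIL_SOCIETY = "civil society"

class ToolType(Enum):
    IMPACT_ASSESSMENT = "impact assessment"
    TECHNICAL_DESIGN = "technical and design"
    AUDITING = "auditing"

@dataclass
class AIEthicsTool:
    name: str
    author_sector: Sector    # (i) which sector built the tool
    user_sector: Sector      # (i) which sector uses it
    stakeholders: list       # (ii) who uses the tool or engages with its results
    tool_type: ToolType      # (iii) type of tool / strategy employed
    internal_only: bool      # (iv) internal use vs. external elements
    lifecycle_stage: str     # (v) stage in the AI production and use chain
    addresses: str           # (vi) "model", "data", or "both"

# A made-up classification of a generic internal checklist tool:
checklist = AIEthicsTool(
    name="Generic deployment checklist",
    author_sector=Sector.ACADEMIC,
    user_sector=Sector.PRIVATE,
    stakeholders=["development team", "board members"],
    tool_type=ToolType.IMPACT_ASSESSMENT,
    internal_only=True,
    lifecycle_stage="testing",
    addresses="model",
)
```

Encoding each of the 39 tools this way is what lets the authors compute the summary statistics discussed below.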

The paper presents statistics characterizing the tools using these questions. Among its findings, the paper uncovers that a wide stakeholder base, involving customers, the broader public, and the environment, is typically not a part of AI ethics evaluation processes. Moreover, the paper finds that almost all AI ethics tools are used internally, without external oversight. The authors emphasize that these characteristics stand in the way of accountability and trustworthiness of organizations that develop AI systems.

Key Insights

The map of the AI ethics tool landscape

The paper divides AI ethics tools into three categories:

  1. Impact assessment tools

Impact assessment is a fact-finding and evaluation process that precedes or accompanies the production of artifacts, systems, or research. Ex-ante assessments are used in the use-case development and testing stages. Ex-post assessments are used post-deployment, in the monitoring stage, to capture the impacts of the system. The predominant impact-assessment tools in AI ethics are checklists and questionnaires.

  2. Technical and design tools

These tools are typically developed by the ML community. Some of them are computational, e.g., computationally identifying and mitigating bias. Others are design processes, e.g., workshop-style events for raising awareness in design teams or participatory design processes. These tools are used along the whole process and can facilitate impact assessment and auditing.
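
As a minimal sketch of what "computationally identifying bias" can mean in practice (our example, not one of the tools the paper surveys), consider measuring demographic parity: the gap in positive-decision rates between two groups. The data below is made up for illustration.

```python
def selection_rate(predictions):
    """Fraction of positive (1) decisions for a group."""
    return sum(predictions) / len(predictions)

def demographic_parity_difference(group_a, group_b):
    """Absolute gap in selection rates between two groups (0 = parity)."""
    return abs(selection_rate(group_a) - selection_rate(group_b))

# Model decisions (1 = approved) for two demographic groups:
group_a = [1, 1, 0, 1, 0]  # selection rate 0.6
group_b = [1, 0, 0, 0, 0]  # selection rate 0.2

gap = demographic_parity_difference(group_a, group_b)
print(round(gap, 2))  # 0.4
```

Mitigation tools then adjust the data, training objective, or decision thresholds to shrink such gaps; design-process tools instead surface these questions in workshops before any code is written.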

  3. Auditing tools

An audit is an examination of evidence of a process or activity against standards or metrics. To ensure transparency and to assign liability, the auditing process needs to be independent of both the assessment process and the day-to-day management of the auditee. AI ethics auditing tools are used in the late stages of the production process, when testing and monitoring the AI system. These tools focus on appropriate documentation for verification and assurance. Checklists are also used for auditing, but less often.

Some statistics: The paper finds that AI ethics tools are developed mostly by the private and academic sectors, but used mostly by the private and public sectors. It also finds that more tools are developed for the early stages of the production process, namely the use-case and design stages. Overall, AI ethics tools focus more on addressing models than on addressing data.

A gap in stakeholder participation

Typically, AI ethics tools are directly used by those who develop the AI system (e.g., development, delivery, quality assurance). The outputs of the AI ethics tools are typically used by decision-makers, such as elected officials and board members. There is typically little participation in the assessment and audit processes by traditionally marginalized groups, the users of the developed services, and vested interest stakeholders such as citizens, shareholders, and investors.

The paper emphasizes the relation between participation in AI ethics processes and power dynamics. The two are linked because participation has to do with who has the power to make decisions, who is invited to the table, and whose views and goals are prioritized. The paper recommends integrating a wider stakeholder base in AI ethics assessments and audits. It also recommends focusing the conversation on power relations rather than strictly on participation. Focusing on participation alone runs the risk of giving rise to “participation washing.”

A gap in auditing

Nearly all the AI ethics tools are for internal self-assessment only. There are generally no requirements or processes for publishing the outputs externally. The authors emphasize that external oversight is required for the trustworthiness of organizations developing the systems. Without robust oversight, there is a risk that organizations that develop AI systems would fall into a “checklist mentality” and would settle for performative gestures that constitute “ethics washing.”

Between the lines

This paper gives us language to talk about the different AI ethics tools that are out there. In doing so, it helps in understanding the complex landscape of AI ethics. The identification of the participation and auditing gaps invites the reader to seek solutions.

One topic for further exploration is which strategies are appropriate for external oversight in AI ethics. It might be tempting to adopt auditing processes familiar from finance and other sectors. However, in AI ethics, participation from a wider stakeholder base seems especially important in external oversight, given that ethical evaluations depend on the values and perspectives available to the evaluator. Can sufficient participation be introduced into familiar auditing processes, and if so, how? Alternatively, would it be better to design new oversight procedures for AI ethics? If so, what should they look like?

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.


© 2025 Montreal AI Ethics Institute. This work is licensed under a Creative Commons Attribution 4.0 International License.