Montreal AI Ethics Institute


Putting AI ethics to work: are the tools fit for purpose?

November 16, 2021

🔬 Research summary by Ravit Dotan, who mainly works in epistemology, philosophy of science, and philosophy of machine learning. Her secondary areas of interest are feminist philosophy and philosophy of religion.

[Original paper by Jacqui Ayling and Adriane Chapman]


Overview: This paper maps the landscape of AI ethics tools: it develops a typology for classifying them and analyzes the existing tools against it. In addition, the paper identifies two gaps. First, key stakeholders, including members of marginalized communities, are under-represented both in using AI ethics tools and in engaging with their outputs. Second, there is a lack of tools for external auditing in AI ethics, which is a barrier to the accountability and trustworthiness of organizations that develop AI systems.


Introduction

As more and more AI ethics tools are developed, it becomes difficult to get a handle on the terrain. This paper addresses the challenge by mapping and analyzing the existing AI ethics tools (as of the end of 2020).

The authors conducted a thorough search and identified 169 AI ethics documents. Of those, 39 were found to include concrete AI ethics tools. Each of the 39 tools was classified using the following questions: (i) What sector are the document’s authors from? And what sector are the users of the tools from? (ii) Which stakeholders would either use the tool or engage with the results? (iii) What type of tool is it? Which strategy does it employ? (iv) Were these tools for use internally, or did they have external elements? (v) In which stage in the AI production and use chain was the tool used? (vi) Was the tool appropriate for addressing the model, data, or both?

The paper presents statistics characterizing the tools using these questions. Among its findings, the paper uncovers that a wide stakeholder base, involving customers, the broader public, and the environment, is typically not a part of AI ethics evaluation processes. Moreover, the paper finds that almost all AI ethics tools are used internally, without external oversight. The authors emphasize that these characteristics stand in the way of accountability and trustworthiness of organizations that develop AI systems.

Key Insights

The map of the AI ethics tool landscape

The paper divides AI ethics tools into three categories:

  1. Impact assessment tools

Impact assessment is a fact-finding and evaluation process that precedes or accompanies the production of artifacts, systems, or research. Ex-ante assessments are used in the use case development and testing stages; ex-post assessments are used post-deployment, in the monitoring stage, to capture the system's impacts. The predominant tools for impact assessment in AI ethics are checklists and questionnaires.

  2. Technical and design tools

These tools are typically developed by the ML community. Some of them are computational, e.g., computationally identifying and mitigating bias. Others are design processes, e.g., workshop-style events for raising awareness in design teams or participatory design processes. These tools are used along the whole process and can facilitate impact assessment and auditing.

  3. Auditing tools

An audit is an examination of evidence of a process or activity against standards or metrics. To ensure transparency and to assign liability, the auditing process needs to be independent of both the assessment process and the day-to-day management of the auditee. AI ethics auditing tools are used in the late stages of the production process, when testing and monitoring the AI system. These tools focus on appropriate documentation for verification and assurance. Checklists are also used for auditing, though less frequently.

Some statistics: The paper finds that AI ethics tools are developed mostly by the private and academic sectors, but used mostly by the private and public sectors. It also finds that more tools are developed for the early stages of the production process, namely the use case and design stages. Overall, AI ethics tools focus more on addressing models than on addressing data.
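As a rough illustration, the paper's classification scheme (the six questions above and the three tool categories) can be sketched as a record type. This is our sketch only: the field names and category labels are illustrative, not the authors' exact taxonomy labels.

```python
# Sketch of the paper's classification scheme as a record type.
# Field names and category values are illustrative, not the authors' labels.
from dataclasses import dataclass
from enum import Enum


class ToolType(Enum):
    IMPACT_ASSESSMENT = "impact assessment"
    TECHNICAL_AND_DESIGN = "technical and design"
    AUDITING = "auditing"


@dataclass
class EthicsToolRecord:
    author_sector: str       # (i) sector of the document's authors
    user_sector: str         # (i) sector of the tool's users
    stakeholders: list       # (ii) who uses the tool or engages with its results
    tool_type: ToolType      # (iii) type of tool / strategy employed
    internal_only: bool      # (iv) internal use only vs. external elements
    production_stage: str    # (v) stage in the AI production and use chain
    addresses: str           # (vi) "model", "data", or "both"


# Example record for a hypothetical internal impact-assessment checklist:
example = EthicsToolRecord(
    author_sector="academic",
    user_sector="private",
    stakeholders=["development", "quality assurance"],
    tool_type=ToolType.IMPACT_ASSESSMENT,
    internal_only=True,
    production_stage="design",
    addresses="model",
)
```

Encoding each of the 39 tools as such a record is what makes the paper's aggregate statistics (e.g., how many tools are internal-only, or which stages they target) straightforward to compute.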

A gap in stakeholder participation

Typically, AI ethics tools are directly used by those who develop the AI system (e.g., development, delivery, quality assurance). The outputs of the AI ethics tools are typically used by decision-makers, such as elected officials and board members. There is typically little participation in the assessment and audit processes by traditionally marginalized groups, the users of the developed services, and vested interest stakeholders such as citizens, shareholders, and investors.

The paper emphasizes the relation between participation in AI ethics processes and power dynamics. The two are linked because participation has to do with who has the power to make decisions, who is invited to the table, and whose views and goals are prioritized. The paper recommends integrating a wider stakeholder base in AI ethics assessments and audits. It also recommends focusing the conversation on power relations rather than strictly on participation. Focusing on participation alone runs the risk of giving rise to “participation washing.”

A gap in auditing

Nearly all the AI ethics tools are for internal self-assessment only. There are generally no requirements or processes for publishing the outputs externally. The authors emphasize that external oversight is required for the trustworthiness of organizations developing the systems. Without robust oversight, there is a risk that organizations that develop AI systems would fall into a “checklist mentality” and would settle for performative gestures that constitute “ethics washing.”

Between the lines

This paper gives us language to talk about the different AI ethics tools that are out there. In doing so, it helps in understanding the complex landscape of AI ethics. The identification of the participation and auditing gaps invites the reader to seek solutions.

One topic for further exploration is which strategies are appropriate for external oversight in AI ethics. It might be tempting to adopt auditing processes familiar from the financial and other sectors. However, in AI ethics, participation from a wider stakeholder base in external oversight seems especially important, given that ethical evaluations depend on the values and perspectives available to the evaluator. Can sufficient participation be introduced into familiar auditing processes, and if so, how? Alternatively, would it be better to design different oversight procedures for AI ethics? If so, what should they look like?

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.


© 2025 Montreal AI Ethics Institute. This work is licensed under a Creative Commons Attribution 4.0 International License.