Montreal AI Ethics Institute


The Algorithm Audit: Scoring the Algorithms That Score Us (Research Summary)

March 1, 2021

🔬 Research summary contributed by Dr. Andrea Pedeferri, instructional designer and leader in higher education (faculty at Union College) and founder of Logica, which helps learners become more efficient thinkers.

[Link to original paper + authors at the bottom]


Overview: Is it right for an AI to decide who can get bail and who can’t? This paper proposes a general model for an algorithm audit that provides clear and effective results while avoiding some of the drawbacks of the approaches offered so far. The model translates the ethical analysis of algorithms into a set of practical steps and deliverables.


Is it right for an AI to decide who can get bail and who can’t? Or to approve or deny your loan application, or your job application? Should we trust an AI as we would trust a fellow human making these decisions? These are just a few of the ethical questions that stem from the widespread use of algorithms in decision-making processes and activities. As AI increasingly replaces humans in such roles, an arms race has emerged to provide capable and efficient evaluations of AI systems.

One viable way to provide guidance and evaluations in these settings is the use of third-party audits. As audits are widespread in the evaluation of decision-making processes and procedures that are wholly or mostly human-centred (think of financial audits, for instance), it is natural to refer to the audit process when we look for ways of providing an ethical assessment of AI’s algorithms.

In their article “The algorithm audit: Scoring the algorithms that score us”, Shea Brown, Jovana Davidovic and Ali Hasan propose a general model for an algorithm audit that is able to provide clear and effective results while also avoiding some of the drawbacks of the approaches offered so far.

Regulators’ primary interest is to assess “the algorithm’s negative impact on the rights and interests of stakeholders, with a corresponding identification of situations and/or features of the algorithm that give rise to these negative impacts.” The authors note, however, that “recently much criticism has been directed at early attempts to provide an ethical analysis of algorithms. Scholars have argued that using the classical analytic approach that over-stresses technical aspects of algorithms and ignores the larger socio-technical power dynamics has resulted in ethical approaches to algorithms that ignore or marginalize some of the primary threats (especially decision-making and classification) that algorithms pose to minorities.”

In their paper, the authors thus aim to provide a more comprehensive framework for algorithm audits that avoids these shortcomings by modelling the ethical analysis of algorithms as a set of “practical steps” and deliverables that can be broadly applied and used by a variety of stakeholders.

In particular, they focus on what they believe to be a critical point that has been mostly overlooked by current ethical audits: the context of the algorithm. By that, they mean the sociological and technical environments within which the algorithm is employed. This includes a broad range of processes, settings, and dynamics that go beyond the technical aspects of the algorithm itself yet affect all the relevant situations and stakeholders that fall within the algorithm’s range of functioning and application. The authors provide as an example the case of algorithms for loan risk: “the negative impacts of a loan risk tool do not simply depend on whether the algorithm is statistically biased against, for example, some minority group because of biased training data; more importantly, the harm emerges from the way a loan officer decides to use that loan risk tool in her decision whether to give out a loan to an applicant”.

So, focusing on the context allows us to create more precise and relevant metrics about specific features for specific stakeholders’ interests. The proposed auditing tool is built from those metrics; this is why, according to the authors, it is essential that a careful analysis of the context be the primary step in the audit.

The actual framework proposed in the paper consists of “(1) a comprehensive list of relevant stakeholder interests and (2) a measure of the algorithm’s potentially ethically relevant attributes (metrics). A clear description of the context is needed both to generate a list of stakeholder interests (1) and to evaluate the key features of the algorithm, i.e. metrics (2). Once steps (1) and (2) are completed we can (3) evaluate the relevance of a good or bad performance of an algorithm on some metric for each stakeholder interest. We can then use the metrics score (2) and the relevancy score (3) to determine the impact of the algorithm on stakeholder interests.”
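The three steps above can be sketched as simple data structures. Everything below — the interests, metrics, numeric scales, and the rule for combining scores with relevancy — is an illustrative assumption, not a schema given in the paper:

```python
# Hypothetical schema; the paper describes the steps abstractly,
# not as concrete data structures.

# (1) Stakeholder interests identified from the context
interests = ["fair treatment of applicants", "privacy of applicant data"]

# (2) Metric scores for the algorithm, assessed independently of one
# another and of stakeholder interests (0 = poor, 1 = good; scale assumed)
metric_scores = {"bias": 0.4, "transparency": 0.7, "security": 0.9}

# (3) Relevancy: how much could poor performance on a metric threaten
# an interest? (0 = irrelevant, 1 = critical; scale assumed)
relevancy = {
    ("fair treatment of applicants", "bias"): 1.0,
    ("fair treatment of applicants", "transparency"): 0.6,
    ("fair treatment of applicants", "security"): 0.2,
    ("privacy of applicant data", "bias"): 0.1,
    ("privacy of applicant data", "transparency"): 0.3,
    ("privacy of applicant data", "security"): 1.0,
}

def impact(interest):
    """Combine metric scores (2) with relevancy (3) into a per-interest
    risk profile; the rule (1 - score) * relevancy is illustrative."""
    return {m: (1 - s) * relevancy[(interest, m)]
            for m, s in metric_scores.items()}

print(impact("fair treatment of applicants"))
```

Under this toy rule, the poorly scoring bias metric dominates the risk profile for fair treatment, which is the kind of conclusion the combined scores are meant to surface.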

With these basic points in mind, the authors go on to provide a fine-grained description of the major elements that should be considered in an ethical audit, first focusing on stakeholder interests and rights. This is followed by a thorough elucidation of the different types of metrics to be taken into consideration for different categories such as bias, effectiveness, transparency, direct impacts, and security & access.

Most current audit processes are limited to the logical and mathematical operations that compose the input-output function of the algorithm. In the proposed framework, the description of an algorithm should also include the “larger socio-technical structure that surrounds this function”. For example, when describing an algorithm by means of metrics, we should include “facts about how the output of the function is used in decision making, and whether the actions taken are done so autonomously or with a human in the loop”. The key metrics are then the “ethically salient features” of the algorithm in the relevant context. An auditor should be able to test and provide assessments (which can be of different kinds, such as narrative, numerical, or categorical) for each metric in an objective way, that is, by assessing each metric independently “of any of the other metrics and independently of stakeholder interests”.

The key feature of the auditing method presented in the paper is the “relevancy matrix”. Here is what the authors mean by that. The two necessary components of the auditing process are knowing which stakeholder interests could be affected by the algorithm in a specific context and knowing the algorithm’s scores on all the metrics. Although each aspect independently provides much useful information for an overall ethical assessment of the algorithm, connecting them yields the full and complete picture necessary to produce meaningful and effective audit results. The idea is to be able to answer the following: “for each stakeholder interest, how much could each metric threaten that interest if the algorithm performs poorly with respect to that metric?”.

The solution proposed is to build a relevancy matrix that connects each interest to each metric. This two-dimensional matrix produces a snapshot that captures and identifies low-scoring metrics that have high relevance to stakeholder interests. The auditor is thereby equipped with a powerful tool that highlights areas of potential negative impact of the algorithm. Moreover, the narrative assessment that accompanies the metrics is a powerful resource for producing strategies to mitigate those potential risks.
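A minimal sketch of how an auditor might scan such a matrix for trouble spots — cells where a low-scoring metric is highly relevant to an interest. The interests, metrics, scores, and thresholds here are all hypothetical, and the paper does not prescribe any particular cutoff rule:

```python
# Illustrative metric scores (0 = poor, 1 = good; scale assumed)
metric_scores = {"bias": 0.3, "effectiveness": 0.8, "transparency": 0.5}

# relevancy[interest][metric]: how much a poor score on that metric
# could threaten the interest (0 = irrelevant, 1 = critical)
relevancy = {
    "non-discrimination": {"bias": 1.0, "effectiveness": 0.4, "transparency": 0.6},
    "due process":        {"bias": 0.5, "effectiveness": 0.3, "transparency": 1.0},
}

def hot_spots(scores, relevancy, score_max=0.5, relevance_min=0.7):
    """Flag (interest, metric) cells where a low-scoring metric
    is highly relevant to a stakeholder interest; thresholds assumed."""
    return [(i, m)
            for i, row in relevancy.items()
            for m, r in row.items()
            if scores[m] <= score_max and r >= relevance_min]

print(hot_spots(metric_scores, relevancy))
```

Here bias (scoring 0.3) is critical to non-discrimination, and transparency (scoring 0.5) is critical to due process, so those two cells are flagged — the "still picture" of potential negative impact the authors describe.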

The central focus on context-dependency, together with the effective deliverable represented by the relevancy matrix, allows this new approach to ethical audits to address the concern that audits obsess over the technical aspects of algorithms while ignoring “the larger socio-technical power dynamics” of the larger context that surrounds them. According to the authors, this narrow vision of auditing has so far resulted in incomplete or incoherent approaches to the ethical evaluation of algorithms that overlook hazards with potentially large impacts, especially on minorities. Brown, Davidovic and Hasan believe that the model presented in their paper can correct those flaws while “staying within the constraints of what a genuine audit can do, which is to provide a consistent and repeatable assessment of (in this case) algorithms”.


Original paper by Shea Brown, Jovana Davidovic, Ali Hasan: https://journals.sagepub.com/doi/full/10.1177/2053951720983865

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.


© 2025 Montreal AI Ethics Institute. This work is licensed under a Creative Commons Attribution 4.0 International License.