Montreal AI Ethics Institute


Ethics-based auditing of automated decision-making systems: intervention points and policy implications

November 5, 2021

Research summary by Angshuman Kaushik, Researcher in AI Policy, Governance and Ethics.

[Original paper by Jakob Mökander and Maria Axente]


Overview: The governance mechanisms currently used to oversee human decision-making often fail when applied to automated decision-making systems ("ADMS"). In this paper, the researchers examine the feasibility and effectiveness of ethics-based auditing ("EBA") as a 'soft' yet 'formal' governance mechanism for regulating ADMS, and discuss the policy implications of their findings.


Introduction

The ethical hazards associated with ADMS are well documented, and the capacity to address and mitigate these risks is essential for good governance. Setting aside the underlying technologies that power ADMS, this paper focuses on the features, such as autonomy, adaptability, and scalability, that underpin both their socially beneficial and ethically challenging uses. More specifically, it examines how organizations can develop and implement effective EBA procedures in practice. While the analysis suggests that EBA is subject to a range of conceptual, technical, economic, legal, and institutional constraints, the researchers nevertheless conclude that EBA should be considered an integral component of multi-faceted approaches to managing the ethical risks posed by ADMS.

EBA: What is it?

The paper focuses entirely on EBA, which is functionally understood as a governance mechanism that helps organizations operationalize their ethical commitments. It concerns what ought and ought not to be done over and above existing regulation. Operationally, EBA is characterized by a structured process whereby an entity's present or past behavior is assessed for consistency with a pre-defined set of principles. Throughout this process, various tools and methods, such as software programs and stakeholder consultation, are employed to verify claims and create documentation. Different EBA procedures employ different tools and contain different steps; however, an EBA differs from simply publishing a code of conduct, since its main activity consists of demonstrating adherence to a pre-defined standard. The paper also emphasizes how organizations can develop and implement effective EBA procedures in practice, rather than concentrating only on what EBA is and why it is needed. The objective is twofold. First, the researchers seek to identify the intervention points, both in organizational governance and in the software development lifecycle, at which EBA can inform ethical deliberation and thereby make a positive difference to the ways in which ADMS are designed and deployed. Second, they seek to contribute to an understanding of how policymakers and regulators can facilitate and support the implementation of EBA procedures in organizations that develop ADMS.

EBA: Different approaches

The paper distinguishes between different approaches to EBA. Functionality audits, for example, focus on the rationale behind decisions. Code audits, in contrast, entail reviewing the source code of an algorithm. Finally, impact audits investigate the types, severity, and prevalence of the effects of an algorithm's outputs. These approaches are complementary and can be combined into holistic EBA procedures. According to the researchers, since autonomous and self-learning ADMS may evolve and adapt over time as they interact with their environments, EBA needs to include at least some element of continuous, real-time monitoring, i.e., impact auditing.

Governing STS and identifying intervention points for EBA

The paper then dwells upon Socio-Technical Systems (STS), which comprise both social entities, like people and organizations, and technical entities, like tools, infrastructures, and processes. ADMS, then, refers to technical systems that encompass decision-making models, algorithms that translate models into computable code, and methods to acquire and process input data. Further, ADMS interact with the entire political and economic environment surrounding their use. The paper goes on to analyze how complex STS are governed today and discusses how EBA procedures can be designed to complement and enhance existing governance structures. Governance consists of both hard and soft aspects. Hard governance mechanisms are systems of rules elaborated and enforced through institutions to govern the behavior of agents. When considering ADMS, examples of hard governance mechanisms range from legal restrictions on system outputs to outright prohibition of the use of ADMS for specific applications. Soft governance, on the other hand, embodies mechanisms that abide by the prescriptions of hard governance while exhibiting some degree of contextual flexibility. A further distinction is made between formal and informal governance mechanisms, where formal governance mechanisms refer to official communications. The researchers go on to advocate EBA as a soft yet formal governance mechanism to complement and strengthen the congruence of existing governance structures within organizations that develop and use ADMS. Finally, the paper looks at some of the potential intervention points (points at which decisions, actions, or activities are likely to shape the design and behavior of ADMS) at which EBA can help shape the design and deployment of ethical ADMS by informing ethical deliberation. They are as follows:

  • value and vision statements;
  • principles and codes of conduct;
  • ethics boards and review committees;
  • stakeholder consultation;
  • employee education and training;
  • performance criteria and incentives;
  • reporting channels;
  • product development;
  • product deployment and redesign;
  • periodic audits; and
  • monitoring of outputs.

Recommendations to policymakers

The paper not only identifies limitations and risks associated with EBA but also discusses how policymakers and regulators can facilitate the adoption of EBA by organizations that design and deploy ADMS. According to the researchers, the organizations that design and deploy ADMS have good reasons to subject themselves and the systems they operate to EBA. For example, ensuring the ethical alignment of ADMS would help organizations manage financial and legal risks, help them gain competitive advantage etc. In fact, the documentation and communication of the steps taken to ensure that ADMS are ethical can play a positive role in both marketing and public relations.

The paper also highlights eight policy recommendations for policymakers and regulators to follow:

  • Help provide working definitions for ADMS – regulators shall define for organizations the material scope for EBA by providing working definitions or risk classifications of ADMS that enable proportionate and progressive governance; 
  • Provide guidance on how to resolve tensions – when designing and operating ADMS, conflicts may arise between different ethical principles such as fairness, privacy etc., for which there are no fixed solutions. In such a scenario, regulators shall provide guidance on how to resolve tensions between such conflicting values in different situations;
  • Support the creation of standardized evaluation metrics and reporting formats – while organizations should be free to adopt different EBA procedures, regulators can also support the creation of standardized evaluation metrics and reporting formats;
  • Facilitate knowledge sharing and communication of best practices – regulators can not only provide digital platforms where software code and data could be shared but also create forums where stakeholders could discuss and share best practices for EBA of ADMS;
  • Create an independent body to oversee EBA of ADMS – create an independent body that authorizes organizations who, in turn, conduct EBA of, or issue ethics-based certifications for, ADMS;
  • Create incentives for voluntary adoption of EBA – implementing EBA across organizations would involve costs. Therefore, to incentivize the voluntary adoption of EBA, regulators should encourage and reward demonstrable achievements;
  • Promote trust through transparency and accountability – regulators can strengthen trust in emerging EBA procedures by ensuring accountability, e.g., by imposing sanctions where trust is breached; and
  • Provide governmental leadership – political leaders can help strengthen the feasibility and effectiveness of EBA as a governance mechanism by explaining and endorsing it. To demonstrate their commitment to officially stated policies, governments can also consider conducting EBA of ADMS employed in the public sector and including ethics-based criteria in the public procurement of ADMS.

Between the lines

This paper provides a holistic and process-oriented approach to EBA. Many of the intervention points listed in the paper already exist within organizations that design and deploy ADMS; hence, implementing EBA would not entail imposing additional layers of governance upon them. The key to developing feasible and effective EBA procedures is to combine existing conceptual frameworks into structured processes that monitor each phase of the ADMS lifecycle, so as to identify and correct the points at which ethical failures may occur. To sum up, the recommendations delineated in this paper would go a long way toward mitigating some of the ethical hazards posed by ADMS.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.

  • © MONTREAL AI ETHICS INSTITUTE. All rights reserved 2024.
  • This work is licensed under a Creative Commons Attribution 4.0 International License.