Montreal AI Ethics Institute

Democratizing AI ethics literacy

Algorithmic accountability for the public sector

September 1, 2021

šŸ”¬ Research summary by Connor Wright, our Partnerships Manager.

[Original paper by Ada Lovelace Institute, AI Now Institute and Open Government Partnership]


Overview: Developments in AI are never-ending, and so is the need for policy regulation. The report surveys what has been implemented, along with its successes and failings, and highlights two pivotal factors in any policymaking effort: context and public participation.


Introduction

With AI developing at warp speed, what is the current situation in the algorithmic space? Do we know what works in terms of regulation? Due to the lack of policy and data on algorithmic regulation in the Global South, the report adopts a European and North American focus. Within that scope, it aims to understand the success of algorithmic accountability policies from different actors’ perspectives. In laying out what has been attempted (alongside its successes and failures), two crucial factors emerge: public participation and context. The latter is where we will begin.

Key Insights

The importance of context when implementing policy

The literature review conducted in the report showed that people understand algorithmic accountability, but know far less about how to implement it. One key element in realising policy is the context in which it is deployed. For example, Canada’s Directive on Automated Decision-Making requires any custom source code owned by the Government to be made public, while New Zealand’s Algorithm Charter for Aotearoa New Zealand asks that information about how data was collected and stored be made available.

With this in mind, the effectiveness of the same policy can differ drastically between two contexts. So, what has been implemented, and what are the general problems with these approaches?

What has been attempted, and what are its faults?

In this section, I give a broad overview of the policy methods carried out by the different actors in the report, alongside their associated problems.

High-level ethical policies: provide a helpful frame of reference to approach algorithmic issues.

Problem: they don’t impose any obligation to take specific actions.

Prohibitions and moratoria: prevent harmful technologies from being used entirely, or give regulators time to catch up to their development.

Problem: they rest on the assumption either that the technology should never be used, or that policy and regulation efforts will be adequate within a couple of years.

Impact assessments: aim to expose how agents have subjectively defined harms and risks.

Problem: they struggle to provide clear avenues for public participation.

Audits: standardise and scrutinise the efforts being made to generate an environment of algorithmic accountability.

Problem: the company must provide adequate data to be audited, and audits assume that a system’s performance during the audit matches its performance afterwards.

Oversight bodies: can influence the behaviour of prominent actors.

Problem: the influence may only be minute.

Appeals to human intervention: involving humans in the process to better ensure fairness and establish some form of responsibility. 

Problem: assumes that having a human in the process actually helps to ensure fairness, and doesn’t acknowledge how algorithmic outputs can influence human decision-making.

The role of the public

Given this last point on human intervention, the role of public participation should not be underestimated. Such participation helps to better match governmental actions with the needs of the people.

Noteworthy, too, is that different people have varying resources available for getting involved. Here, access to the media can help level the playing field.

The role of the media

Legal frameworks don’t rely solely on the law to be effective, but also on factors such as ā€œpolitical will and cultural normsā€. Pressure from media outlets can help reinforce the need to implement and maintain policies beyond their legally binding status. Such intervention can make policies ā€˜societally binding’, meeting the need for communication between policymakers and the public.

Between the lines

For me, the key findings are the importance of the public and of context within policymaking. A ā€˜one size fits all’ attitude can no longer be adopted in the algorithmic space, bringing in the need for an appropriately scoped approach. Regulating individual actors too closely can ignore the systemic and social pressures present, while adopting too broad a viewpoint can gloss over important peculiarities that need attention in different contexts. What’s for sure, in my eyes, is that while policy aims to serve the public, it must first learn from the public.

