Montreal AI Ethics Institute


Algorithmic accountability for the public sector

September 1, 2021

🔬 Research summary by Connor Wright, our Partnerships Manager.

[Original paper by Ada Lovelace Institute, AI Now Institute and Open Government Partnership]


Overview: The developments in AI are never-ending, and so is the need for policy regulation. The report surveys what has been implemented, along with its successes and failings, while also presenting two pivotal factors in any policy context: the importance of context itself and of public participation.


Introduction

With AI developing at warp speed, what is the current situation in the algorithmic space? Do we know what works in terms of regulation? Due to the lack of policy and data about algorithmic regulation in the Global South, the paper adopts a European and North American focus. Nevertheless, the report aims to understand the success of algorithmic accountability policies from different actors’ perspectives. In exposing what has been attempted (alongside its successes and failures), two crucial factors emerged: public participation and context. The latter is where we begin.

Key Insights

The importance of context when implementing policy

The literature review conducted in the report showed that people understand algorithmic accountability, but far less about how to implement it. One key element in realising policy is the context in which it is deployed. For example, Canada’s Directive on Automated Decision-Making requires any custom source code owned by the Government to be made public, while the Algorithm Charter for Aotearoa New Zealand asks that information about how data was collected and stored be made available.

With this in mind, the same policy can be drastically more or less effective depending on its context. So, what has been implemented, and what are the general problems with these approaches?

What has been attempted, and what are their faults?

In this section, I give a broad overview of the policy methods carried out by different actors in the report, along with their associated problems.

High-level ethical policies: provide a helpful frame of reference for approaching algorithmic issues.

Problem: they don’t create any obligation to take specific actions.

Prohibitions and moratoria: prevent harmful technologies from being used entirely, or give regulators time to catch up to their development.

Problem: they rest on the assumption that either the technology should never be used, or that policy and regulation efforts will be adequate within a couple of years.

Impact assessments: aim to expose how the agents involved have subjectively defined harms and risks.

Problem: they struggle to provide clear avenues for public participation.

Audits: standardise and scrutinise the efforts being made to generate an environment of algorithmic accountability.

Problem: the company must provide adequate data to be audited, and the approach assumes that performance during the audit is the same as performance afterwards.

Oversight bodies: offer the possibility of influencing the behaviour of prominent actors.

Problem: the influence may only be minute.

Appeals to human intervention: involving humans in the process to better ensure fairness and establish some form of responsibility. 

Problem: this assumes that having a human in the process actually helps ensure fairness, and doesn’t acknowledge how algorithmic outputs can influence human decision-making.

The role of the public

Given the last point on human intervention, the role of public participation should not be underestimated. Such participation helps to better match governmental actions with the needs of the people.

What’s still noteworthy is how different people have varying resources that allow them to get involved. Here, access to the media can help level this playing field.

The role of the media

Legal frameworks don’t rely on the law alone to be effective, but also on other factors such as “political will and cultural norms”. Pressure from media outlets can help reinforce the need to implement and maintain policies beyond their legally binding status. Such intervention can make policies ‘societally binding’, addressing the need for communication between policymakers and the public.

Between the lines

For me, the key findings are the importance of the public and the context within policymaking. No longer can a ‘one size fits all’ attitude be adopted in the algorithmic space, bringing in the need for an appropriate scope. Regulating individual actors too closely can ignore the systemic and social pressures present. Adopting too broad a viewpoint can then generalise important peculiarities that need attention in different contexts. What’s for sure, in my eyes, is that while policy aims to serve the public, it must first learn from the public.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.

