Montreal AI Ethics Institute

Algorithmic accountability for the public sector

September 1, 2021

🔬 Research summary by Connor Wright, our Partnerships Manager.

[Original paper by Ada Lovelace Institute, AI Now Institute and Open Government Partnership]


Overview: The developments in AI are never-ending, and so is the need for policy regulation. The report surveys what has been implemented, along with its successes and failings, and identifies two pivotal factors in any policy context: the importance of context and of public participation.


Introduction

With AI developing at warp speed, what is the current situation in the algorithmic space? Do we know what works in terms of regulation? Due to the lack of policy and data about algorithmic regulation in the Global South, the paper adopts a European and North American focus. Within that scope, the report aims to understand the success of algorithmic accountability policies from different actors' perspectives. In exposing what has been attempted (alongside its successes and failures), the report finds that two crucial factors emerge: public participation and context. The latter is where we will begin.

Key Insights

The importance of context when implementing policy

The literature review conducted in the report showed that people understand algorithmic accountability, but know much less about how to implement it. One key element in realising policy is the context in which it is deployed. For example, the Canadian Directive on Automated Decision-Making requires any custom source code owned by the Government to be made public, while the Aotearoa New Zealand Algorithm Charter instead requires that information about how data was collected and stored be made available.

With this in mind, the same policy can be drastically more or less effective depending on the context in which it lands. So, what has been implemented, and what are the general problems with these approaches?

What has been attempted, and what are their faults?

In this section, I give a broad overview of the policy methods carried out by different actors in the report, along with their associated problems.

High-level ethical policies: provide a helpful frame of reference to approach algorithmic issues.

Problem: doesn’t provide any form of obligation to specific actions.

Prohibitions and moratoria: prevent harmful technologies from being used entirely, or give regulators time to catch up to their development.

Problem: they rest on the assumption that either the technology should never be used, or that policy and regulation efforts will be adequate within a couple of years.

Impact assessments: aim to expose how agents have subjectively defined what harms and risks are.

Problem: they struggle to provide clear avenues for public participation.

Audits: standardise and scrutinise the efforts being made to generate an environment of algorithmic accountability.

Problem: the company must provide adequate data to be audited, and auditing assumes that performance during the audit matches performance afterwards.

Oversight bodies: possibility of influencing the behaviour of prominent actors.

Problem: the influence may only be minimal.

Appeals to human intervention: involving humans in the process to better ensure fairness and establish some form of responsibility. 

Problem: assumes that having a human in the process actually helps to ensure fairness, and fails to acknowledge how algorithmic outputs can influence human decision-making.

The role of the public

Given the last point on human intervention, the role of public intervention should not be underestimated. Public participation helps to better match governmental actions with the needs of the people.

Still noteworthy, though, is that different people have varying resources that allow them to get involved. Here, access to the media can help level the playing field.

The role of the media

Legal frameworks don't just rely on the law to be effective, but also on other factors such as "political will and cultural norms". Pressure from media outlets can help to reinforce the need to implement and maintain policies beyond just their legally binding status. Such intervention can make policies 'societally binding', addressing the need for communication between policymakers and the public.

Between the lines

For me, the key findings are the importance of the public and of context within policymaking. No longer can a 'one size fits all' attitude be adopted in the algorithmic space, bringing in the need for an appropriate scope. Regulating individual actors too closely can ignore the systemic and social pressures present. Adopting too broad a viewpoint can then generalise important peculiarities that need attention in different contexts. What's for sure, in my eyes, is that while policy aims to serve the public, it must first learn from the public.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.


  • Ā© 2025 MONTREAL AI ETHICS INSTITUTE.
  • This work is licensed under a Creative Commons Attribution 4.0 International License.