Montreal AI Ethics Institute

Democratizing AI ethics literacy

Algorithmic accountability for the public sector

September 1, 2021

🔬 Research summary by Connor Wright, our Partnerships Manager.

[Original paper by Ada Lovelace Institute, AI Now Institute and Open Government Partnership]


Overview: AI development is never-ending, and so is the need for policy regulation. The report surveys what has been implemented to date, along with its successes and failings, and highlights two pivotal factors in any policy context: the importance of context and of public participation.


Introduction

With AI developing at warp speed, what is the current situation in the algorithmic space? Do we know what works in terms of regulation? Due to the lack of policy and data about algorithmic regulation in the Global South, the paper adopts a European and North American focus. Nevertheless, this report aims to understand the success of algorithmic accountability policies from different actors’ perspectives. In exposing what has been attempted (alongside its successes and failures), the report surfaces two crucial factors: public participation and context. The latter is where we are going to begin.

Key Insights

The importance of context when implementing policy

The literature review conducted in the report showed that people understand what algorithmic accountability is, but far less about how to implement it. One key element in realising policy is the context in which it is deployed. For example, Canada’s Directive on Automated Decision-Making requires any custom source code owned by the Government to be made public, whereas Aotearoa New Zealand’s Algorithm Charter asks that information about how data was collected and stored be made available.

With this in mind, the same policy can be drastically more or less effective depending on the context. So, what has been implemented, and what are the general problems with these approaches?

What has been attempted, and what are their faults?

In this section, I give a broad overview of the policy methods carried out by the different actors in the report, along with their associated problems.

High-level ethical policies: provide a helpful frame of reference to approach algorithmic issues.

Problem: they don’t create any obligation to take specific actions.

Prohibitions and moratoria: prevent harmful technologies from being used entirely, or give regulators time to catch up to their development.

Problem: they rest on the assumption that either the technology should never be used, or that policy and regulation efforts will be adequate within a couple of years.

Impact assessments: aim to expose how the agents involved have subjectively defined what counts as harm and risk.

Problem: they struggle to provide clear avenues for public participation.

Audits: standardise and scrutinise the efforts being made to generate an environment of algorithmic accountability.

Problem: they depend on the company providing adequate data to be audited, and they assume the system’s performance during the audit matches its behaviour afterwards.

Oversight bodies: possibility of influencing the behaviour of prominent actors.

Problem: the influence may only be minute.

Appeals to human intervention: involve humans in the process to better ensure fairness and establish some form of responsibility.

Problem: they assume that having a human in the process actually helps ensure fairness, and they fail to acknowledge how algorithmic outputs can influence human decision-making.

The role of the public

Given the last point on human intervention, the role of public intervention should not be underestimated. Such intervention helps better match governmental actions with the needs of the people.

It is also noteworthy that different people have varying resources that allow them to get involved. Here, access to the media can help level the playing field.

The role of the media

Legal frameworks don’t rely on the law alone to be effective, but also on other factors such as “political will and cultural norms”. Pressure from media outlets can help reinforce the need to implement and maintain policies beyond their legally binding status. Such intervention can make policies ‘societally binding’, addressing the need for communication between policymakers and the public.

Between the lines

For me, the key findings are the importance of the public and of context within policymaking. A ‘one size fits all’ attitude can no longer be adopted in the algorithmic space, which brings in the need for an appropriate scope. Regulating individual actors too closely can ignore the systemic and social pressures at play, while adopting too broad a viewpoint can gloss over important peculiarities that need attention in different contexts. What’s for sure, in my eyes, is that while policy aims to serve the public, it must first learn from the public.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.
