Montreal AI Ethics Institute

Democratizing AI ethics literacy

Acceptable Risks in Europe’s Proposed AI Act: Reasonableness and Other Principles for Deciding How Much Risk Management Is Enough

September 7, 2023

🔬 Research Summary by Dr. Henry Fraser, a Research Fellow in Law, Accountability, and Data Science at the Centre of Excellence for Automated Decision-Making and Society.

[Original paper by Henry Fraser and José-Miguel Bello y Villarino]


Overview: The European Union’s draft ‘AI Act’ aims to promote “trustworthy” AI with a proportionate regulatory burden. The final text of the Act is currently under negotiation between the European Commission, the European Parliament, and the Council of the European Union. This paper critically evaluates competing approaches to risk acceptability that are up for negotiation, explaining why any obligation to render risks from AI systems “acceptable” must be qualified by considering what is reasonable in all the circumstances.


Introduction

You are the developer of an AI system that will evaluate university applications throughout Europe. Under Article 9 of Europe’s draft AI Act, which may become law as early as 2024, you have an obligation to implement risk management because the system is “high-risk”. Risk management must ensure that any remaining risks are “acceptable.” What does that even mean? How do you decide when risks from high-risk AI systems (with potential impacts on safety, rights, health, or the environment) are acceptable?

The final text of the Act (formally, a Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence) is currently under negotiation between the three main EU institutions: the Commission, the Council, and the Parliament. Among many other thorny issues, negotiators must choose between two competing approaches to risk acceptability. One approach, proposed by the European Commission, would require risks to be reduced “as far as possible” (AFAP) in design and development, with remaining risks subject to further mitigation. The Parliament, by contrast, proposes to introduce considerations of what is “reasonably” acceptable.

This paper critically evaluates the two approaches, exploring how AFAP has been interpreted in other contexts and drawing on negligence and other laws to understand what “reasonably acceptable” risks might mean. It finds that the Parliament’s approach is more compatible with the AI Act’s overarching goals of promoting trustworthy AI with a proportionate regulatory burden. 

Key Insights

Why does risk acceptability matter?

Trustworthiness and proportionate regulatory burden are the AI Act’s two main goals. Because there are so many issues under consideration in negotiations about the Act – from the definition of AI to the responsibilities of foundation model developers – the approach to risk acceptability has mostly flown under the radar. That belies its importance. The rules about when risks are acceptable and when they are not determine how “trustworthy” AI systems really are and how much burden the AI Act will place on AI development.

It’s a bad idea to require AI risks to be reduced or eliminated “as far as possible.”

A requirement to reduce risks as far as possible, which the Commission’s version of the Act contemplates, is exacting if taken literally. AI outputs are known to be “emergent” (unpredictable), and it is always possible to implement just one more measure to reduce risk. Our research shows that the European Commission has historically taken a very narrow approach to the AFAP risk criterion in the context of medical devices. The Commission went so far as to require a change to the ISO standard for medical device risk management, stating that in Europe, medical device risks had to be reduced as far as possible “without there being room for economic considerations.” Our survey of industry responses to this change indicated that such a narrow risk acceptability criterion created uncertainty about where to draw the line for risk management. Rather than disregarding economic considerations, businesses seemed to be encouraged to conceal the cost-benefit analyses underlying their risk management. The same problems are likely to arise in the AI context.

It makes sense to factor in the costs and benefits of risk management when judging the acceptability of AI risks.

The Parliament’s proposed approach to risk management for high-risk AI would introduce considerations of reasonableness, proportionality, and the impact of risk management on the potential benefits of the AI system into risk acceptability judgments. Drawing lessons from negligence law (the area of law par excellence concerned with when risks are unacceptable) and medical device regulation, our paper explains how principles of reasonableness could allow AI developers to make more principled risk acceptability judgments. Developers could factor in various kinds of cost-benefit and risk-benefit analyses, including whether the cost of a given risk management measure is worth the risk reduction, whether risk management negatively impacts the overall benefit of an AI system, and whether risks are significant enough to warrant expenditure of finite risk management resources.
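The kind of calculus this implies can be made concrete. The sketch below is purely illustrative and not drawn from the paper: a toy, Learned-Hand-style comparison (expected harm avoided versus the cost of a mitigation plus the benefit it forgoes) of the sort a reasonableness criterion would let developers make explicit. All names and figures are invented for the example.

# Illustrative sketch only; not from the paper. A toy model of the cost-benefit
# reasoning a "reasonableness" criterion permits, loosely inspired by the
# Learned Hand formula from negligence law (take a precaution when B < P * L).
from dataclasses import dataclass

@dataclass
class Mitigation:
    name: str
    cost: float            # cost of implementing the measure (e.g. in EUR)
    risk_reduction: float  # estimated reduction in the probability of harm (0 to 1)
    benefit_loss: float    # estimated benefit of the AI system forgone by the measure

def worth_adopting(m: Mitigation, p_harm: float, harm_magnitude: float) -> bool:
    """In this toy model, a mitigation is 'reasonable' to adopt when the expected
    harm it avoids exceeds its implementation cost plus the benefit it forgoes."""
    expected_harm_avoided = m.risk_reduction * p_harm * harm_magnitude
    return expected_harm_avoided > m.cost + m.benefit_loss

# Hypothetical measure for the university-admissions system in the introduction.
manual_review = Mitigation(name="human review of borderline rejections",
                           cost=50_000, risk_reduction=0.4, benefit_loss=10_000)

# Expected harm avoided: 0.4 * 0.05 * 5,000,000 = 100,000 > 60,000, so True.
print(worth_adopting(manual_review, p_harm=0.05, harm_magnitude=5_000_000))

Under a literal “as far as possible” criterion, by contrast, the measure would be required (and so would the next one, and the next) regardless of how such a comparison comes out.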

Between the lines

The choice between the stringent “as far as possible” risk acceptability criterion and the more flexible approach permitted by introducing reasonableness should be informed by the overall architecture of the AI Act and by the issues of public policy that are at stake. The Act contemplates that its requirements, including risk management, will be met through certification against technical standards – mostly self-certification. It also states that risk management should consider the “state of the art,” including as reflected in standards. In effect, this means that technical standards and the state of the art play the role of a pressure valve: once you meet the state of the art, you can say you’ve reduced a risk “as far as possible.” 

But why should it fall to technical standards bodies or to the big tech companies whose practices shape the state of the art to decide when risks to fundamental rights from AI are acceptable? It is not clear they have the expertise in human rights or the political legitimacy to exercise this kind of discretion over matters of public policy. 

The benefit of a reasonableness approach is that it brings all the trade-offs involved in risk acceptability judgments to the fore and acknowledges that these judgments are value-laden. Ultimately, their legitimacy will need to be supported by input from stakeholders and affected groups and by guidance from regulators with the requisite expertise and legitimacy.

