
Acceptable Risks in Europe’s Proposed AI Act: Reasonableness and Other Principles for Deciding How Much Risk Management Is Enough

September 7, 2023

🔬 Research Summary by Dr. Henry Fraser, a Research Fellow in Law, Accountability, and Data Science at the Centre of Excellence for Automated Decision-Making and Society.

[Original paper by Henry Fraser and José-Miguel Bello y Villarino]


Overview: The European Union’s draft ‘AI Act’ aims to promote “trustworthy” AI with a proportionate regulatory burden. The final text of the Act is currently under negotiation between the European Commission, the European Parliament, and the Council of the European Union. This paper critically evaluates competing approaches to risk acceptability that are up for negotiation, explaining why any obligation to render risks from AI systems “acceptable” must be qualified by considering what is reasonable in all the circumstances.


Introduction

You are the developer of an AI system that will evaluate university applications throughout Europe. Under Article 9 of Europe’s draft AI Act, which may become law as early as 2024, you have an obligation to implement risk management because the system is “high-risk”. Risk management must ensure that any remaining risks are “acceptable.” What does that even mean? How do you decide when risks from high-risk AI systems (with potential impacts on safety, rights, health, or the environment) are acceptable?

The final text of the Act (formally, a Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence) is currently under negotiation between the three main EU institutions: the Commission, the Council, and the Parliament. Among the many thorny issues on the table, negotiators must choose between two competing approaches to risk acceptability. One approach, proposed by the European Commission, would require risks to be reduced “as far as possible” (AFAP) in design and development, with remaining risks subject to further mitigation. The Parliament, by contrast, proposes to introduce considerations of what is “reasonably” acceptable.

This paper critically evaluates the two approaches, exploring how AFAP has been interpreted in other contexts and drawing on negligence and other laws to understand what “reasonably acceptable” risks might mean. It finds that the Parliament’s approach is more compatible with the AI Act’s overarching goals of promoting trustworthy AI with a proportionate regulatory burden. 

Key Insights

Why does risk acceptability matter?

Trustworthiness and a proportionate regulatory burden are the AI Act’s two main goals. Because there are so many issues under consideration in negotiations about the Act – from the definition of AI to the responsibilities of foundation model developers – the approach to risk acceptability has mostly flown under the radar. That belies its importance. The rules about when risks are, and are not, acceptable determine how “trustworthy” AI systems really are and how much burden the AI Act will place on AI development.

It’s a bad idea to require AI risks to be reduced or eliminated “as far as possible.”

A requirement to reduce risks as far as possible, which the Commission’s version of the Act contemplates, is exacting if taken literally. AI outputs are known to be “emergent” (unpredictable), and it is always possible to implement just one more measure to reduce risk. Our research shows that the European Commission has historically taken a very narrow approach to the AFAP risk criterion in the context of medical devices. The Commission went so far as to require a change to the ISO standard for medical device risk management, stating that in Europe, medical device risks had to be reduced as far as possible “without there being room for economic considerations.” Our survey of industry responses to this change indicated that such a narrow risk acceptability criterion created uncertainty about where to draw the line for risk management. It seemed to encourage businesses to conceal their cost-benefit analysis around risk management rather than actually disregard economic considerations. The same problems are likely to arise in the AI context.

It makes sense to factor in the costs and benefits of risk management when judging the acceptability of AI risks

The Parliament’s proposed approach to risk management for high-risk AI would introduce considerations of reasonableness, proportionality, and the impact of risk management on the potential benefits of the AI system into risk acceptability judgments. Drawing lessons from negligence law (the body of law par excellence on when risks are unacceptable) and medical device regulation, our paper explains how principles of reasonableness could allow AI developers to make more principled risk acceptability judgments. It would allow them to factor in various kinds of cost-benefit and risk-benefit analyses, including whether the cost of a given risk management measure is worth the risk reduction, whether risk management negatively impacts the overall benefit of an AI system, and whether risks are significant enough to warrant expenditure of finite risk management resources.
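
To make the contrast concrete, the toy sketch below shows how an AFAP-style rule and a reasonableness-style test might diverge over the same set of candidate risk controls. It is purely illustrative: the Mitigation fields, thresholds, and example numbers are invented for this summary and are not drawn from the paper or the draft Act.

```python
# Illustrative sketch only: a toy model of the two risk-acceptability tests
# discussed above. All fields, thresholds, and numbers are hypothetical.
from dataclasses import dataclass


@dataclass
class Mitigation:
    name: str
    risk_reduction: float   # expected reduction in residual risk (0-1 scale)
    cost: float             # implementation cost, arbitrary units
    benefit_loss: float     # reduction in the system's overall benefit (0-1 scale)


def afap_decision(mitigations: list[Mitigation]) -> list[Mitigation]:
    """'As far as possible': adopt every measure that reduces risk at all,
    regardless of its cost or its impact on the system's benefits."""
    return [m for m in mitigations if m.risk_reduction > 0]


def reasonableness_decision(mitigations: list[Mitigation],
                            cost_per_unit_risk: float = 100.0,
                            max_benefit_loss: float = 0.2) -> list[Mitigation]:
    """Reasonableness-style test: adopt a measure only if the risk reduction
    is worth its cost and it does not unduly erode the system's benefits."""
    adopted = []
    for m in mitigations:
        worth_the_cost = m.cost <= cost_per_unit_risk * m.risk_reduction
        preserves_benefit = m.benefit_loss <= max_benefit_loss
        if worth_the_cost and preserves_benefit:
            adopted.append(m)
    return adopted


if __name__ == "__main__":
    candidates = [
        Mitigation("bias audit of training data", 0.30, 20.0, 0.02),
        Mitigation("human review of every decision", 0.05, 500.0, 0.40),
    ]
    print([m.name for m in afap_decision(candidates)])            # adopts both
    print([m.name for m in reasonableness_decision(candidates)])  # adopts only the audit
```

The point of the sketch is simply that a literal AFAP rule has no principled stopping point, whereas a reasonableness test forces the trade-offs (cost, benefit, significance of the risk) into the open, which is the argument the paper develops.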

Between the lines

The choice between the stringent “as far as possible” risk acceptability criterion and the more flexible approach permitted by introducing reasonableness should be informed by the overall architecture of the AI Act and by the issues of public policy that are at stake. The Act contemplates that its requirements, including risk management, will be met through certification against technical standards – mostly self-certification. It also states that risk management should consider the “state of the art,” including as reflected in standards. In effect, this means that technical standards and the state of the art play the role of a pressure valve: once you meet the state of the art, you can say you’ve reduced a risk “as far as possible.” 

But why should it fall to technical standards bodies or to the big tech companies whose practices shape the state of the art to decide when risks to fundamental rights from AI are acceptable? It is not clear they have the expertise in human rights or the political legitimacy to exercise this kind of discretion over matters of public policy. 

The benefit of a reasonableness approach is that it brings all the trade-offs involved in risk acceptability judgments to the fore. It acknowledges that these are value-laden judgments. Ultimately, the legitimacy of these judgments will need to be supported by input from stakeholders and affected groups and by guidance from regulators with the requisite expertise and legitimacy.
