Montreal AI Ethics Institute
Democratizing AI ethics literacy


Acceptable Risks in Europe’s Proposed AI Act: Reasonableness and Other Principles for Deciding How Much Risk Management Is Enough

September 7, 2023

🔬 Research Summary by Dr. Henry Fraser, a Research Fellow in Law, Accountability, and Data Science at the Centre of Excellence for Automated Decision-Making and Society.

[Original paper by Henry Fraser and José-Miguel Bello y Villarino]


Overview: The European Union’s draft ‘AI Act’ aims to promote “trustworthy” AI with a proportionate regulatory burden. The final text of the Act is currently under negotiation between the European Commission, the European Parliament, and the Council of the European Union. This paper critically evaluates competing approaches to risk acceptability that are up for negotiation, explaining why any obligation to render risks from AI systems “acceptable” must be qualified by considering what is reasonable in all the circumstances.


Introduction

You are the developer of an AI system that will evaluate university applications throughout Europe. Under Article 9 of Europe’s draft AI Act, which may become law as early as 2024, you have an obligation to implement risk management because the system is “high-risk”. Risk management must ensure that any remaining risks are “acceptable.” What does that even mean? How do you decide when risks from high-risk AI systems (with potential impacts on safety, rights, health, or the environment) are acceptable?

The final text of the Act (formally, a Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence) is currently under negotiation between the three main institutions of the European Union: the Commission, the Council, and the Parliament. Among many other thorny issues, negotiators must choose between two competing approaches to risk acceptability. One approach, proposed by the European Commission, would require risks to be reduced “as far as possible” (AFAP) in design and development, with remaining risks subject to further mitigation. The Parliament, by contrast, proposes to introduce considerations of what is “reasonably” acceptable.

This paper critically evaluates the two approaches, exploring how AFAP has been interpreted in other contexts and drawing on negligence and other laws to understand what “reasonably acceptable” risks might mean. It finds that the Parliament’s approach is more compatible with the AI Act’s overarching goals of promoting trustworthy AI with a proportionate regulatory burden. 

Key Insights

Why does risk acceptability matter?

Trustworthiness and proportionate regulatory burden are the AI Act’s two main goals. Because there are so many issues under consideration in negotiations about the Act – from the definition of AI to the responsibilities of foundation model developers – the approach to risk acceptability has mostly flown under the radar. That belies its importance. The rules about when risks are, and are not, acceptable determine how “trustworthy” AI systems really are and how much burden the AI Act will place on AI development.

It’s a bad idea to require AI risks to be reduced or eliminated “as far as possible.”

A requirement to reduce risks as far as possible, which the Commission’s version of the Act contemplates, is exacting if taken literally. AI outputs are known to be “emergent” (unpredictable), and it is always possible to implement just one more measure to reduce risk. Our research shows that the European Commission has historically taken a very narrow approach to the AFAP risk criterion in the context of medical devices. The Commission went so far as to require a change to the ISO standard for medical device risk management, stating that in Europe, medical device risks had to be reduced as far as possible “without there being room for economic considerations.” Our survey of industry responses to this change indicated that such a narrow risk acceptability criterion created uncertainty about where to draw the line for risk management. It seemed to encourage businesses to conceal their cost-benefit analysis around risk management rather than to disregard economic considerations. The same problems are likely to arise in the AI context.

It makes sense to factor in the costs and benefits of risk management when judging the acceptability of AI risks.

The Parliament’s proposed approach to risk management for high-risk AI would introduce considerations of reasonableness, proportionality, and the impact of risk management on the potential benefits of the AI system into risk acceptability judgments. Drawing lessons from negligence law (the body of law par excellence for deciding when risks are unacceptable) and medical device regulation, our paper explains how principles of reasonableness could allow AI developers to make more principled risk acceptability judgments. It would allow them to factor in various kinds of cost-benefit and risk-benefit analyses, including whether the cost of a given risk management measure is worth the risk reduction, whether risk management negatively impacts the overall benefit of an AI system, and whether risks are significant enough to warrant expenditure of finite risk management resources.
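To make the flavour of this reasoning concrete, one well-known benchmark from negligence law is the Learned Hand formula, under which a precaution is reasonable when its burden is less than the probability of the harm it prevents multiplied by the harm’s severity. The Python sketch below is a deliberately simplified, hypothetical illustration of that style of cost-benefit check; it is not a method proposed in the paper, and the names and numbers are invented.

```python
from dataclasses import dataclass

@dataclass
class Mitigation:
    """A hypothetical risk management measure for a high-risk AI system."""
    name: str
    cost: float            # direct burden of taking the precaution
    risk_reduction: float  # reduction in the probability of the harm occurring
    benefit_loss: float    # value of system benefits the measure would remove

def is_reasonable(m: Mitigation, harm_severity: float) -> bool:
    """Hand-formula-style check: the measure is warranted when the expected
    harm it averts (probability reduction x severity) outweighs its total
    burden, counting both direct cost and foregone benefits."""
    averted_harm = m.risk_reduction * harm_severity
    total_burden = m.cost + m.benefit_loss
    return total_burden < averted_harm

# Illustrative numbers only: an independent bias audit for an admissions model.
audit = Mitigation("independent bias audit", cost=50_000,
                   risk_reduction=0.02, benefit_loss=10_000)
print(is_reasonable(audit, harm_severity=5_000_000))  # True: 60,000 < 100,000
```

In practice, many of these quantities resist meaningful quantification, which is part of why the paper argues that such judgments need input from stakeholders and guidance from regulators rather than being left to developers alone.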

Between the lines

The choice between the stringent “as far as possible” risk acceptability criterion and the more flexible approach permitted by introducing reasonableness should be informed by the overall architecture of the AI Act and by the issues of public policy that are at stake. The Act contemplates that its requirements, including risk management, will be met through certification against technical standards – mostly self-certification. It also states that risk management should consider the “state of the art,” including as reflected in standards. In effect, this means that technical standards and the state of the art play the role of a pressure valve: once you meet the state of the art, you can say you’ve reduced a risk “as far as possible.” 

But why should it fall to technical standards bodies or to the big tech companies whose practices shape the state of the art to decide when risks to fundamental rights from AI are acceptable? It is not clear they have the expertise in human rights or the political legitimacy to exercise this kind of discretion over matters of public policy. 

The benefit of a reasonableness approach is that it brings all the trade-offs involved in risk acceptability judgments to the fore. It acknowledges that these are value-laden judgments. Ultimately, their legitimacy will need to be supported by input from stakeholders and affected groups, and by guidance from regulators with the requisite expertise and legitimacy.


