The Evolution of the Draft European Union AI Act after the European Parliament’s Amendments

June 27, 2023

✍️ Article by Dessislava Fessenko, a Master of Bioethics candidate in the Center for Bioethics at Harvard Medical School. She is also an antitrust and technology lawyer and a policy researcher working on AI policy and governance. Her research interests include fairness, the overall ethical alignment of artificial intelligence, and social justice more broadly.


Overview: On 14 June 2023, the European Parliament adopted its proposed amendments (in legalese: its negotiating position) to the draft European Union AI Act (the “Draft EU AI Act”). The proposed amendments significantly reshape the European Commission’s proposal by fine-tuning and expanding the scope of the Draft EU AI Act, its risk mitigation requirements, and its governance mechanism. The amendments also firmly anchor the Act in universal ethical principles and human rights as the ultimate benchmarks for assessing the social acceptability of AI systems developed and deployed in and outside the European Union.


Why the European Parliament’s Amendments Matter

The European Parliament’s amendments mark an important juncture in the adoption of the Draft EU AI Act. The legislative process started more than two years ago, when the European Commission, the EU’s executive body, tabled its proposal for the Act in April 2021. The proposal then went through extensive scrutiny and deliberation in the legislative and political branches of the European Union – the European Parliament and the Council of the European Union, respectively. The Council concluded the political discussions among the EU member states and adopted its amendments to the Draft EU AI Act in November 2022. With the delivery of the European Parliament’s position, the shape, form, and trajectory of possible further changes to the Draft EU AI Act are largely predetermined. What remains to be seen is which position – the European Parliament’s or the Council’s – will prevail on the points where the two diverge.

More clarity in this regard is expected in the next six months. During that period, the European Parliament, the Council of the European Union, and the European Commission will hold negotiations (the so-called “trilogue”) in which they must compromise and agree on a final text of the Draft EU AI Act. While the Council largely accepted the European Commission’s proposal and made few significant changes to it, the European Parliament revised the proposal in significant ways, which may be politically difficult to bypass. This article provides an overview of those revisions.

The Inception

The Draft EU AI Act was essentially conceived as a product safety regulation. In its initial proposal for the Act, the European Commission justifies the need to regulate AI with the risks that the technology poses first and foremost to health and safety and only then to persons’ fundamental rights.[1] This angle sets the Draft EU AI Act on a footing that rationalizes intervention only with respect to the marketing of AI systems, i.e., only insofar as such systems are to be placed on the market or put into service, and only concerning the terms on which this may happen (see the “Delegated Risk Mitigation” section below).[2] The only substantive provisos in the European Commission’s proposal concern the types of prohibited AI uses, the classification of high-risk AI systems, and the quality of the training, testing, and validation datasets used in high-risk AI systems.[3]

This approach underpins the regulatory design and frames the substance of the Draft EU AI Act in several significant ways. First, the Draft EU AI Act prescribes, to a greater extent, the pre-market risk mitigation and certification protocols and procedures to be followed for an AI system to gain access to the European Union’s market and, to a lesser degree, substantive requirements for the characteristics and performance of AI systems. Second, such “meatier” requirements will essentially be set through industry standards to be adopted separately;[4] only then will the full scope and breadth of the required compliance crystallize. Third, the Draft EU AI Act tackles concerns about AI’s social impact more remotely than touted and hoped for: the enforcement of other laws (e.g., human rights, labor, non-discrimination, and data protection regulations) would still play a significant role in addressing the possible societal implications of the use of AI. Fourth, the Draft EU AI Act will not be the only regulation governing the use of AI systems and their interactions with real-world environments. Human rights laws and data protection, data governance, cybersecurity, consumer protection, and platform regulations, among others, will still apply to various aspects of the AI supply chain and operations, which will make compliance more complex.[5]

The European Parliament’s amendments attempt – although only partially and with mixed success – to remedy some of these deficiencies.

Acknowledgment of Risks 

The European Parliament recognizes more openly the nature and breadth of the risks stemming from AI. They range from the technology’s opacity and inaccuracy, the gradual displacement of humans from the loop, and AI’s extensive processing of data (some of it protected, e.g., personal or copyrighted) to AI’s deployment in various forms of monitoring, screening, surveillance, and predictive analytics of human behavior.[6] The substantial energy consumption of, and the resulting environmental harm from, training and operating AI models also feature prominently on the European Parliament’s list of concerns.[7]

The impact of the technology on fundamental human rights (e.g., data protection, individual privacy, labor, intellectual property), on society at large, and on the environment appears to have led the European Parliament to put the protection of these rights and interests front and center in its amendments to the Draft EU AI Act. This shift in focus reshapes the Draft EU AI Act in several notable ways.

Partial Re-Calibration

The European Parliament’s amendments more directly target forms, applications, and uses of AI with actual or potential adverse effects on individual rights, society, and the environment. 

Foundation models (including generative ones) and general-purpose AI are recognized as sources of such risks that merit oversight. To that end, the European Parliament expands the application of the Draft EU AI Act to also cover these types of AI. A foundation model qualifies as “an AI system model that is trained on broad data at scale, is designed for generality of output, and can be adapted to a wide range of distinctive tasks.”[8] General-purpose AI is “an AI system that can be used in and adapted to a wide range of applications for which it was not intentionally and specifically designed.”[9]

From its inception, the Draft EU AI Act bans certain AI applications that threaten or undermine persons’ agency, privacy, and right of defense, e.g., subliminal techniques, social scoring, and certain instances of remote biometric identification.[10] The European Parliament expands these bans to cover, e.g., all instances of remote biometric identification, emotion recognition, and certain forms of facial recognition.[11]

The Draft EU AI Act imposes certain risk mitigation obligations with respect to AI systems considered to pose high risks to health and safety and to fundamental rights. Which systems qualify as “high-risk” is determined by a bespoke risk classification annexed to the Draft EU AI Act.[12] The European Parliament now reshuffles and expands this classification to target systems with significant effects on persons’ fundamental rights, access to opportunities, and voting behavior (in the context of elections), e.g., biometric and biometrics-based systems, systems deployed to determine eligibility for social services, and recommender systems of social media platforms.[13]

Ethical Alignment

The initial proposal for the Draft EU AI Act lacks an explicit anchor in specific principles to guide and inform its interpretation, implementation, and enforcement. The European Parliament’s amendments try to remedy that deficiency by proposing a framework of six key tenets: (i) human agency and oversight, (ii) technical robustness and safety, (iii) privacy and data governance, (iv) transparency, (v) diversity, non-discrimination and fairness, and (vi) social and environmental well-being. All operators of AI systems, irrespective of the systems’ type and risk profile, should “make best efforts” to comply with these principles when developing and using AI systems.[14]

The addition of this framework of principles appears to be a direct response to the sociotechnical risks and harms from the use of AI described in the “Acknowledgment of Risks” section above. The ethical and human rights grounding of the framework is obvious: it entails a series of value determinations as to what is considered morally required, right, or good based on the prevailing European moral ideals. Applying the framework would require a sensitive, all-things-considered approach to assessing and ensuring an AI system’s social and technical robustness, reliability, and adequacy. This would alter the essence of compliance required under the EU AI Act from process-bound to more effect-driven.

The Data at the Heart

The Draft EU AI Act introduces stringent data governance requirements from the outset.[15] The European Parliament, however, puts an even stronger emphasis on the significance of data and its adequate protection for the trustworthy operation and use of AI systems. The Parliament’s amendments seek to partially re-purpose the Draft EU AI Act into a regulation also aimed at upholding the individual rights to data protection and privacy (in addition to increasing market integration within the European Union).[16]

Furthermore, providers of high-risk AI systems are expected to attain and maintain high-quality training, validation, and testing datasets. “Specific attention” should be paid to mitigating biases in the data.[17] Datasets must also be context-sensitive enough to account for the “features, characteristics or elements that are particular to the specific geographical, contextual, behavioural or functional setting or context within which the AI system is intended to be used.”[18]

Delegated Risk Mitigation 

The Draft EU AI Act introduces a bespoke regime for mitigating the risks and possible harms of high-risk AI systems only. It does so by delegating to the providers of high-risk AI four broad types of obligations: (i) to establish and maintain risk management systems, (ii) to ensure transparency and disclosure, (iii) to take technical and organizational measures ensuring the robustness, accuracy, and cybersecurity of high-risk AI systems, and (iv) to implement quality management and assurance.[19]

The European Parliament effectively sets a baseline for risk management by requiring that the respective protocols and procedures be “objective-driven, fit for purpose, reasonable and effective” and that they mitigate the “reasonably foreseeable” risks from, and possible misuses of, AI systems.[20]

Providers will be allowed to conduct quality assurance in-house for most high-risk AI. A smaller proportion of high-risk AI will qualify for external certification (the so-called “conformity assessment”) by independent third parties (pre-market certifiers). However, regardless of the type of high-risk AI system, deployers must conduct fundamental rights impact assessments, draw up detailed risk mitigation plans, and consult relevant stakeholders and affected groups on those plans.[21]

Although not considered high-risk systems, foundation models must meet requirements for design, development, data governance, and risk and quality management similar to those for high-risk AI.[22]

Conclusion

The adoption of the Draft EU AI Act has reached a crossroads at which its efficacy and effectiveness in addressing the sociotechnical risks and possible harms of using AI will be decided. EU lawmakers must now choose between retaining the initial regulatory approach proposed by the European Commission, adopting the amendments of the European Parliament or of the Council of the European Union, or settling on some mixture of the three. The European Commission’s proposal for, and the Council’s amendments to, the Draft EU AI Act entail fewer value determinations and fewer substantive requirements, more process-bound and transparency-based compliance, and no principle-based hardwiring of the Act. The European Parliament’s amendments ground the Draft EU AI Act in ethics and human rights, which would likely drive more effect-based compliance that goes beyond mere box-ticking. A mixture of these approaches would likely result from a series of compromises to reach a middle ground. If that middle ground turns out to be the lowest common denominator, the efficacy and effectiveness of the EU AI Act may equally be at stake.

References

[1]. Pursuant to, e.g., recitals 1 and 5 and Article 1 of the European Commission’s proposal for the Draft EU AI Act, available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206.

[2]. Contained in Titles III and IV of the European Commission’s proposal for the Draft EU AI Act.

[3]. Pursuant to Articles 5 and 10 of the European Commission’s proposal for the Draft EU AI Act.

[4]. Pursuant to Article 40 and the following articles of the European Commission’s proposal for the Draft EU AI Act.

[5]. Pursuant to, e.g., recitals 7, 24, 28, and 72 of the European Commission’s proposal for the Draft EU AI Act.

[6]. To that effect, e.g., recitals 6a and 36 of the European Parliament’s negotiating position on the Draft EU AI Act, available at: https://www.europarl.europa.eu/doceo/document/TA-9-2023-0236_EN.html.

[7]. To that effect, e.g., recitals 46 and 48b, and Article 28b of the European Parliament’s negotiating position on the Draft EU AI Act.

[8]. Article 3, point 1(c) (new) of the European Parliament’s negotiating position on the Draft EU AI Act.

[9]. Article 3, point 1(d) (new) of the European Parliament’s negotiating position on the Draft EU AI Act.

[10]. Pursuant to Article 5 of the European Commission’s proposal for the Draft EU AI Act.

[11]. Pursuant to Article 5 of the European Parliament’s negotiating position on the Draft EU AI Act.

[12]. In Annex III to the European Commission’s proposal for the Draft EU AI Act.

[13]. In Annex III to the European Parliament’s negotiating position on the Draft EU AI Act.

[14]. Pursuant to Article 4a (new) of the European Parliament’s negotiating position on the Draft EU AI Act.

[15]. Pursuant to recital 2 and Article 10 of the European Commission’s proposal for the Draft EU AI Act.

[16]. To that effect, recital 2a of the European Parliament’s negotiating position on the Draft EU AI Act.

[17]. To that effect, recitals 44 and 45 of the European Parliament’s negotiating position on the Draft EU AI Act.

[18]. Pursuant to Article 10(2) in conjunction with recital 44 of the European Parliament’s negotiating position on the Draft EU AI Act.

[19]. Titles III and IV of the European Commission’s proposal for the Draft EU AI Act.

[20]. To that effect, e.g., recital 42 and Article 8(2) of the European Parliament’s negotiating position on the Draft EU AI Act.

[21]. Pursuant to Article 29(6) of the European Parliament’s negotiating position on the Draft EU AI Act.

[22]. Under Article 29a and the following articles of the European Parliament’s negotiating position on the Draft EU AI Act.
