✍️ Article by Dessislava Fessenko, a Master of Bioethics candidate in the Center for Bioethics at Harvard Medical School. She is also an antitrust and technology lawyer and a policy researcher working on AI policy and governance. Her research interests include fairness, the overall ethical alignment of artificial intelligence, and social justice more broadly.
Overview: On 14 June 2023, the European Parliament adopted its proposed amendments (in legalese: negotiating position) to the draft European Union AI Act (the “Draft EU AI Act”). The proposed amendments significantly reshape the European Commission’s proposal by fine-tuning and expanding the scope of the Draft EU AI Act, its risk mitigation requirements, and its governance mechanism. The amendments also firmly establish universal ethical principles and human rights as the ultimate benchmarks for assessing the social acceptability of AI systems developed and deployed in and outside the European Union.
Why the European Parliament’s amendments matter
The European Parliament’s amendments mark an important juncture in the adoption of the Draft EU AI Act. The legislative process started more than two years ago, when the European Commission, the EU’s executive body, tabled its proposal for the Act in April 2021. The proposal went through extensive scrutiny and deliberation at the legislative and political branches of the European Union – the European Parliament and the Council of the European Union, respectively. The Council concluded the political discussions among the EU member states and adopted its amendments to the Draft EU AI Act in November 2022. With the delivery of the European Parliament’s position, the shape, form, and trajectory of possible further changes to the Draft EU AI Act are largely predetermined. What remains to be seen is which position – the European Parliament’s or the Council’s – will prevail on the points where the two diverge.
More clarity in this regard is expected in the next six months. During that period, the European Parliament, the Council of the European Union, and the European Commission will hold negotiations (the so-called “trilogue”) and will have to compromise and agree on a final text of the Draft EU AI Act. While the Council largely accepted the European Commission’s proposal and made few significant changes to it, the European Parliament revised it in significant ways, which may be politically difficult to bypass. This article provides an overview of those revisions.
The Draft EU AI Act was essentially conceived as a product safety regulation. In its initial proposal for the Act, the European Commission justifies the need to regulate AI by pointing to the risks the technology poses, first and foremost, to health and safety and, only then, to persons’ fundamental rights. This angle sets the Draft EU AI Act on a footing that rationalizes intervention only with respect to the marketing of AI systems, i.e., only insofar as such systems are to be placed on the market or put into service, and concerning the terms for doing so (read further details here and here, and section 7 below). The only substantive provisos in the European Commission’s proposal concern the types of prohibited AI uses, the classification of high-risk AI systems, and the quality of the training, testing, and validation datasets used in high-risk AI systems.
This approach underpins the regulatory design and frames the substance of the Draft EU AI Act in several significant ways. First, the Draft EU AI Act prescribes, to a greater extent, pre-market risk mitigation and certification protocols and procedures to be followed for an AI system to gain access to the European Union’s market and, to a lesser degree, substantive requirements for the characteristics and performance of AI systems. Second, such “meatier” requirements will essentially be set through industry standards to be adopted separately. Only then will the full scope and breadth of the required compliance crystallize. Third, the Draft EU AI Act tackles concerns regarding AI’s social impact in a more remote way than touted and hoped for. The enforcement of other laws (e.g., human rights, labor, non-discrimination, and data protection regulations) would still play a significant role in addressing the possible societal implications of the use of AI. Fourth, the Draft EU AI Act will not be the only regulation governing the use of AI systems and their interactions with real-world environments. Human rights laws and data protection, data governance, cybersecurity, consumer protection, and platform regulations, among others, will still apply to various aspects of the AI supply chain and operations, which will make compliance more complex.
The European Parliament’s amendments attempt – although only partially and with mixed success – to remedy some of these deficiencies.
Acknowledgment of Risks
The European Parliament recognizes more openly the nature and breadth of the risks stemming from AI. These range from the technology’s opacity and inaccuracy, the gradual displacement of humans from the loop, and AI’s extensive processing of data (some of it protected, e.g., personal or copyrighted), to AI’s deployment for various forms of monitoring, screening, surveillance, and predictive analytics of human behavior. The substantial energy consumed by, and the resulting environmental harm from, training and operating AI models also feature prominently on the European Parliament’s list.
The impact of the technology on fundamental human rights (e.g., data protection, individual privacy, labor, intellectual property), on society at large, and on the environment appears to have led the European Parliament to put the protection of these rights and interests front and center in its amendments to the Draft EU AI Act. This shift in focus also reshapes the Draft EU AI Act in notable ways.
The European Parliament’s amendments more directly target forms, applications, and uses of AI with actual or potential adverse effects on individual rights, society, and the environment.
Foundation models (including generative ones) and general-purpose AI are recognized as sources of such risks that merit oversight. To that end, the European Parliament expands the application of the Draft EU AI Act to cover these types of AI as well. A foundation model is defined as “an AI system model that is trained on broad data at scale, is designed for generality of output, and can be adapted to a wide range of distinctive tasks.” General-purpose AI is defined as “an AI system that can be used in and adapted to a wide range of applications for which it was not intentionally and specifically designed.”
From its inception, the Draft EU AI Act bans certain AI applications that threaten or undermine persons’ agency, privacy, and right of defense, e.g., subliminal techniques, social scoring, and certain instances of remote biometric identification. The European Parliament expands these bans to cover, e.g., all instances of remote biometric identification, emotion recognition, and certain forms of facial recognition.
The Draft EU AI Act imposes certain risk mitigation obligations with respect to AI systems that are considered to pose high risks to health and safety and to fundamental rights. Which systems qualify as “high-risk” is determined by a bespoke risk classification annexed to the Draft EU AI Act (read further details here). The European Parliament now reshuffles and expands this risk classification to target systems with pertinent effects on persons’ fundamental rights, access to opportunities, and voting behavior (in the context of elections), e.g., biometric and biometrics-based systems, systems deployed to determine eligibility for social services, and the recommender systems of social media platforms.
The initial proposal for the Draft EU AI Act lacks an explicit anchor in specific principles that should guide and inform its interpretation, implementation, and enforcement. The European Parliament’s amendments try to remedy that deficiency by proposing a framework of six key tenets: (i) human agency and oversight, (ii) technical robustness and safety, (iii) privacy and data governance, (iv) transparency, (v) diversity, non-discrimination, and fairness, and (vi) social and environmental well-being. All operators of AI systems, irrespective of the systems’ type and risk profile, should “make best efforts” to comply with these principles when developing and using AI systems.
The addition of this framework of principles appears to serve as a direct response to the sociotechnical risks and harms from the use of AI described in section 2. The ethical and human rights grounding of this framework is obvious. It entails a series of value determinations as to what is considered morally required, right, or good based on the prevailing European moral ideals. Applying this framework of principles would require a sensitive, all-things-considered approach to assessing and ensuring an AI system’s social and technical robustness, reliability, and adequacy. This would alter the essence of the compliance required under the EU AI Act from process-bound to more effect-driven.
The Data at the Heart
The Draft EU AI Act introduces stringent data governance requirements from the outset. The European Parliament, however, puts an even stronger emphasis on the significance of data and its adequate protection for the trustworthy operation and use of AI systems. The Parliament’s amendments seek to partially re-purpose the Draft EU AI Act into a regulation also aimed at upholding the individual rights to data protection and privacy (in addition to increasing market integration within the European Union).
Furthermore, providers of high-risk AI systems are expected to attain and maintain high-quality training, validation, and testing datasets. “Specific attention” should be paid to mitigating biases in the data. Datasets must also be context-sensitive enough to account for the “features, characteristics or elements that are particular to the specific geographical, contextual, behavioural or functional setting or context within which the AI system is intended to be used.”
Delegated Risk Mitigation
The Draft EU AI Act introduces a bespoke regime for mitigating the risks and possible harms of high-risk AI systems only. This is done by delegating to the providers of high-risk AI four broad types of obligations: (i) to establish and maintain risk management systems, (ii) to ensure transparency and disclosure, (iii) to take technical and organizational measures ensuring the robustness, accuracy, and cybersecurity of high-risk AI systems, and (iv) to implement quality management and assurance.
The European Parliament effectively draws a baseline concerning risk management by noting that the respective protocols and procedures should be “objective-driven, fit for purpose, reasonable and effective” and mitigate the “reasonably foreseeable” risks from and possible misuses of AI systems.
Providers will be allowed to conduct quality assurance in-house for most high-risk AI systems. A smaller proportion of high-risk AI will be subject to external certification (the so-called “conformity assessment”) by an independent third party (a pre-market certifier). Regardless of the type of high-risk AI system, however, deployers must conduct fundamental rights impact assessments, draw up detailed risk mitigation plans, and discuss them with relevant stakeholders and affected groups.
Although not considered high-risk systems, foundation models must meet requirements for design, development, data governance, and risk and quality management similar to those for high-risk AI.
The adoption of the Draft EU AI Act has reached a crossroads at which its efficacy and effectiveness in addressing the sociotechnical risks and possible harms of using AI are to be decided. The European Union’s lawmakers must now decide between retaining the initial regulatory approach proposed by the European Commission, adopting the amendments of the European Parliament or of the Council of the European Union, or some mixture of the three. The European Commission’s proposal for, and the Council’s amendments to, the Draft EU AI Act entail fewer value determinations and actual substantive requirements, more process-bound and transparency-based compliance, and no principle-based hardwiring of the Draft EU AI Act. The European Parliament’s amendments ground the Draft EU AI Act in ethics and human rights, which would likely drive more effect-based compliance beyond mere box-ticking. A mixture of these approaches would likely result from a series of compromises to reach a middle ground. If this middle ground turns out to be the lowest common denominator, the efficacy and effectiveness of the EU AI Act may equally be at stake.
1. Pursuant to, e.g., recitals 1 and 5 and Article 1 of the European Commission’s proposal for the Draft EU AI Act, available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206.
2. Contained in Titles III and IV of the European Commission’s proposal for the Draft EU AI Act.
3. Pursuant to Articles 5 and 10 of the European Commission’s proposal for the Draft EU AI Act.
4. Pursuant to Articles 40 and the following of the European Commission’s proposal for the Draft EU AI Act.
5. Pursuant to, e.g., recitals 7, 24, 28, and 72 of the European Commission’s proposal for the Draft EU AI Act.
6. To that effect, e.g., recitals 6a and 36 of the European Parliament’s negotiating position on the Draft EU AI Act, available at: https://www.europarl.europa.eu/doceo/document/TA-9-2023-0236_EN.html.
7. To that effect, e.g., recitals 46 and 48b, and Article 28b of the European Parliament’s negotiating position on the Draft EU AI Act.
8. Article 3, point 1(c) (new) of the European Parliament’s negotiating position on the Draft EU AI Act.
9. Article 3, point 1(d) (new) of the European Parliament’s negotiating position on the Draft EU AI Act.
10. Pursuant to Article 5 of the European Commission’s proposal for the Draft EU AI Act.
11. Pursuant to Article 5 of the European Parliament’s negotiating position on the Draft EU AI Act.
12. In Annex III to the European Commission’s proposal for the Draft EU AI Act.
13. In Annex III to the European Parliament’s negotiating position on the Draft EU AI Act.
14. Pursuant to Article 4a (new) of the European Parliament’s negotiating position on the Draft EU AI Act.
15. Pursuant to recital 2 and Article 10 of the European Commission’s proposal for the Draft EU AI Act.
16. To that effect, recital 2a of the European Parliament’s negotiating position on the Draft EU AI Act.
17. To that effect, recitals 44 and 45 of the European Parliament’s negotiating position on the Draft EU AI Act.
18. Pursuant to Article 10(2) in conjunction with recital 44 of the European Parliament’s negotiating position on the Draft EU AI Act.
19. Titles III and IV of the European Commission’s proposal for the Draft EU AI Act.
20. To that effect, e.g., recital 42 and Article 8(2) of the European Parliament’s negotiating position on the Draft EU AI Act.
21. Pursuant to Article 29(6) of the European Parliament’s negotiating position on the Draft EU AI Act.
22. Under Articles 29a and the following of the European Parliament’s negotiating position on the Draft EU AI Act.