🔬 Analysis by Philippe Dambly (Senior Lecturer at University of Liège) and Axel Beelen (Legal Consultant specialized in data protection and AI)
[Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, COM (2021) 206 final of 21 April 2021 – 2021/0106 (COD).]
Overview: Artificial intelligence (AI) is one of the technologies that can optimize existing activities (e.g. improving purchase predictions) or create new opportunities (e.g. autonomous cars). However, the development of AI is not without risk. As part of its Digital Agenda, the European Commission proposed, on 21 April 2021, a Proposal for a Regulation to harmonize the rules surrounding the use of AI-based applications (89 explanatory recitals and 85 articles). The aim is to create an environment of trust for the development and use of AI systems in Europe.
Introduction
The European Commission’s text is the result of two years of work and consultations with national regulators from EU countries, expert groups (such as the AI HLEG), European companies and civil society. The European approach is based on European values and on risk, in order to ensure security and the protection of fundamental rights, even if the right balance between legal certainty and the development of this new technology is not easy to strike. In the lines that follow, you will find our analysis of the European text, which is, for the moment, still making its way through the EU legislative process[2].
Structure and presentation of the Proposal for a Regulation
Composition of the text of the Regulation
The proposed Regulation is composed of twelve titles:
1. Title 1 “General provisions”: Articles 1 to 4;
2. Title 2 “Prohibited artificial intelligence practices”: Article 5;
3. Title 3 “High-risk AI systems”: Articles 6 to 51;
4. Title 4 “Transparency obligations for certain AI systems”: Article 52;
5. Title 5 “Measures in support of innovation”: Articles 53 to 55;
6. Title 6 “Governance”: Articles 56 to 59;
7. Title 7 “EU database for stand-alone high-risk AI systems”: Article 60;
8. Title 8 “Post-market monitoring, information sharing, market surveillance”: Articles 61 to 68;
9. Title 9 “Codes of Conduct”: Article 69;
10. Title 10 “Confidentiality and penalties”: Articles 70 to 72;
11. Title 11 “Delegation of power and committee procedure”: Articles 73 and 74;
12. Title 12 “Final provisions”: Articles 75 to 85.
Purpose of the Future Regulation
The purpose of this European Regulation is to establish:
1. harmonised rules on the placing on the market, putting into service and use of artificial intelligence systems (referred to in the Regulation as ‘AI systems’) in the European Union;
2. the prohibition of certain practices in the field of artificial intelligence;
3. specific requirements for high-risk AI systems and obligations imposed on operators of such systems (we will see which ones later);
4. harmonised transparency rules for AI systems intended to interact with natural persons, for emotion recognition and biometric categorisation systems, and for AI systems used to generate or manipulate image, audio or video content;
5. rules on monitoring and market surveillance.
These objectives are multiple and ambitious. They are essential if AI activities are to respect our rights and freedoms.
Territorial Scope
The proposal applies to:
1. users and providers of AI systems located within the EU;
2. providers established outside the EU who place on the market or put into service AI systems within the EU;
3. providers and users of AI systems established outside the EU when the results generated by the system are used in the EU.
Excluded from the scope of the Regulation are AI systems developed or used exclusively for military purposes, as well as public authorities of third countries and international organisations using AI systems in the framework of international law enforcement and judicial cooperation agreements with the EU or with one or more of its Member States.
Material Scope
Annex I of the text defines the techniques and approaches which fall within the material scope of the Proposal for a Regulation.
These techniques and approaches must be:
1. machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning;
2. logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems; and
3. statistical approaches, Bayesian estimation, search and optimization methods.
AI Definition
The European Commission has chosen a broad and neutral definition of an artificial intelligence system[3], designating it as a software “that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with”.[4]
AI has evolved considerably over the years. Born several decades ago, at a time when expert systems were all the rage, it has already undergone several modifications – or mutations. We have now entered the age of machine learning, deep learning, supervised and unsupervised learning, reinforcement learning…[5]
Therefore, the authors of the proposed regulatory framework wanted to define AI in the most neutral way possible, so as to cover all techniques – including those not yet known or not yet developed. The intention is, through the rules and concepts defined, to encompass symbolic AI, machine learning, hybrid systems and more, with a list of techniques and approaches attached to the regulatory framework that may be adapted over time.
Classification by the Risks
The European Commission has chosen a risk-based approach to “trustworthy” artificial intelligence. Four categories are distinguished in the Proposal for a Regulation of April 2021.
The higher one moves up the scale, the stricter the conditions to be respected by designers, suppliers, integrators, “implementers” and others:
1. unacceptable risk: a prohibition applies to practices that exploit the vulnerability of children or people with disabilities, such as a toy that would induce a baby to engage in behaviour that could harm it. The same goes for social scoring by public authorities – the principle of assigning a rating to “good citizens”, allowing them to access social benefits – as well as for the use of real-time remote biometric identification systems, such as facial recognition cameras directly connected to databases. The latter category, however, includes several exceptions, such as the search for a missing child or the localisation of a perpetrator or suspect in cases of terrorism, trafficking in human beings or child pornography;
2. high risk: rules of traceability, transparency and robustness apply when harm to the safety or rights of individuals is possible. This concerns biometric identification, the management of critical infrastructure (water, electricity, etc.), AI systems intended for admission to educational institutions or for human resources management, AI applications for access to essential services (bank credit, public services, social benefits, justice, etc.), use for policing missions, as well as migration management and border controls. High-risk AI systems will have to be recorded in a database maintained by the European Commission and searchable by all;
3. low risk: where risks are limited, the Commission requires transparency on the part of the provider. For example, users interacting with an online chatbot must be informed that they are talking to a robot, so that they can make an informed decision on whether or not to proceed;
4. minimal risk: uses that, according to the Commission, pose no risk to citizens’ rights, such as email spam filters, are not subject to a specific framework.

To receive the green light for their marketing and deployment, high-risk systems will have to meet stringent obligations and high quality and safety standards, such as traceability of the use of the technology (Article 12), transparency vis-à-vis users (Article 13) and the need for human oversight (Article 14). They must also “achieve an appropriate level of accuracy, robustness and cybersecurity” (Article 15). Ex ante and ex post controls are also foreseen. The provider will have to officially register its artificial intelligence system in an EU database after an assessment of its compliance with the requirements described here (Articles 51 and 60).
| Level of risk | Types of AI and areas of application | Rules and obligations |
| --- | --- | --- |
| Unacceptable risk | – AI applications that manipulate human behaviour to deprive users of their free will; – systems that allow social scoring by States. | Prohibition. |
| High risk | Remote biometric identification systems. | Real-time use restricted to law enforcement in limited cases: for example, to search for a missing child, prevent a terrorist threat, or detect, locate, identify or prosecute the perpetrator or suspect of a serious criminal offence. These systems must be authorised by a judicial or other independent body. |
| High risk | – Critical infrastructure (electricity, transport, etc.); – education or vocational training; – product safety components; – workforce management and access to employment; – essential private and public services; – policing; – management of migration, asylum and border controls; – administration of justice and democratic processes. | – High quality of data sets to reduce discriminatory results; – risk assessment and mitigation systems; – recording of activities to ensure traceability of results; – detailed documentation; – clear and adequate information for the user; – appropriate human oversight; – high level of robustness, security and accuracy. |
| Limited risk | Chatbots, callbots, voicebots… | AI systems subject to specific transparency obligations vis-à-vis users. |
| Minimal risk | Video games, spam filters… | Free use without regulatory obligation. |
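The risk scale summarised in the table above can be pictured as a simple ordered taxonomy. The following sketch is purely illustrative – the tier names come from the proposal, but the mapping of application areas to tiers is a hypothetical simplification; only the legal text (Article 5, Annex III, Article 52) is authoritative.

```python
from enum import Enum


class RiskTier(Enum):
    """Illustrative model of the proposal's four risk tiers."""
    UNACCEPTABLE = 4  # prohibited practices (Article 5)
    HIGH = 3          # areas listed in Annex III, strict obligations
    LIMITED = 2       # transparency obligations (Article 52)
    MINIMAL = 1       # no specific regulatory obligation


# Hypothetical mapping from example application areas (as summarised
# in the table above) to tiers; a real assessment is a legal exercise,
# not a dictionary lookup.
AREA_TO_TIER = {
    "social scoring by states": RiskTier.UNACCEPTABLE,
    "remote biometric identification": RiskTier.HIGH,
    "critical infrastructure": RiskTier.HIGH,
    "education and vocational training": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}


def tier_for(area: str) -> RiskTier:
    """Look up the tier for a known area; unknown areas default to MINIMAL."""
    return AREA_TO_TIER.get(area.lower(), RiskTier.MINIMAL)
```

The ordering of the enum values mirrors the idea that obligations become stricter as one moves up the scale, so tiers can be compared via their `value` attribute.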
Control and Supervisory Authorities
In this regard, the European Commission has opted for a very different approach from that of the GDPR.
Each Member State will have to designate a supervisory authority responsible for monitoring the application of the Regulation on its national territory. For cross-border situations, no GDPR-style one-stop-shop mechanism has been provided. The Commission merely plans the creation of a European Artificial Intelligence Board to coordinate all these measures.
Penalties may reach €30 million or 6% of total worldwide annual turnover, whichever is higher, in the event of non-compliance with the rules on prohibited practices or on the use of data (Article 71).
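The cap on the fine for the most serious infringements is a simple maximum of a fixed amount and a turnover share. A minimal sketch of that arithmetic, assuming the "whichever is higher" rule of Article 71 for undertakings (a simplification, not legal advice):

```python
def max_penalty_eur(annual_turnover_eur: float) -> float:
    """Upper bound of the administrative fine for the most serious
    infringements (prohibited practices, data-governance rules):
    EUR 30 million or 6% of total worldwide annual turnover,
    whichever is higher. Simplified illustration of Article 71."""
    return max(30_000_000.0, 0.06 * annual_turnover_eur)
```

For a company with €1 billion in turnover the 6% branch dominates (€60 million); for smaller companies the €30 million floor applies.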
For applications posing a low or minimal risk of infringement of fundamental rights, providers are encouraged to adopt codes of conduct on a voluntary basis.
In addition to this new proposal for a regulation, a revision of the 2006 Machinery Directive is planned to take account of technological developments in this field. It should take the form of a regulation on machine safety, covering, for instance, construction machinery, 3D printers or industrial lines that may have an artificial intelligence component in their operation.
The Different Actors of the Text
The Commission also encourages companies to anticipate these risks in the design and operation of their AI products, by internally defining preventive “good behaviors” through codes of conduct.
In this sense, the draft AI regulation is a regulation that encourages business ethics. It also provides for a system of certification of companies’ AI compliance mechanisms by “evaluation bodies”. To obtain this certification, companies will have to set up a “quality management” system of the kind found in the standards of the International Organization for Standardization (ISO) [which enacts the technical standards imposed on companies]. The AI regulation is therefore, in a way, a standards-based regulation.
The positive points of the proposal for a Regulation
We can highlight several positive points stemming from the proposal for a Regulation:
1. it is wrong to keep claiming that legislating early prevents the development of new technologies (and AI is such a technology); the law steps in to frame potential abuses, in order to prevent their occurrence and their societal harm;
2. the proposed text has, like the GDPR, an extraterritorial reach. We see this as an advantage, believing that the European values of protection of fundamental and human rights are central to states governed by the rule of law.
Issues
Several issues can nonetheless be raised:
1. compliance costs, potentially high for companies, for artificial intelligence systems considered high risk;
2. the proposal does not contain a complete ban on the use of “facial recognition and remote biometric recognition for mass surveillance”. Preliminary analyses should also include the carbon footprint of the various tools;
3. several applications of artificial intelligence are missing from the “high risk” category, such as those used to determine an insurance premium, to evaluate medical treatments or for health research purposes;[6]
4. some intrusive forms of AI should be banned, such as social scoring by private companies that can amass large amounts of data – the ban in the Commission’s proposal only applies to public authorities in this area – or technologies to infer a person’s emotions.
The vagueness of certain definitions and the imprecision of certain measures should also be pointed out. Conflicts of interpretation are likely to be numerous. For example, while facial recognition “in real time” is banned, it remains possible “after a significant delay”, under certain conditions. These are issues that may be raised by the Council and the European Parliament, which may amend the text before it enters into force.

Who, moreover, should monitor the application of the rules enacted? The European institutions? Each Member State? Through an efficient coordinated approach? As foreseen at this stage of the proposal, governance and enforcement rest mainly with national authorities. Why not, however, introduce a more balanced approach in which supervision would be carried out by national authorities together with a new European supervisory authority? This would prevent the creation of inequalities between countries, which would otherwise cause de facto difficulties in the conditions for deploying and implementing good AI solutions, and in the way actors and users perceive them across the different countries.
Regulation and ethics
Surprisingly, despite the heavy use of the notion of ethics in the various documents[7] preceding the proposal for a regulation, no trace of ethics is to be found in the body of the text of the proposal itself.
We note in the recitals two passages (of which only recital 5 is relevant to our remarks) where the EU speaks of ethics:
· Recital 5: “By laying down those rules, this Regulation supports the objective of the Union of being a global leader in the development of secure, trustworthy and ethical artificial intelligence, as stated by the European Council, and it ensures the protection of ethical principles, as specifically requested by the European Parliament”;
· Recital 16: “The placing on the market, putting into service or use of certain AI systems intended to distort human behaviour, whereby physical or psychological harms are likely to occur, should be forbidden. Such AI systems deploy subliminal components individuals cannot perceive or exploit vulnerabilities of children and people due to their age, physical or mental incapacities. They do so with the intention to materially distort the behaviour of a person and in a manner that causes or is likely to cause harm to that or another person. The intention may not be presumed if the distortion of human behaviour results from factors external to the AI system which are outside of the control of the provider or the user. Research for legitimate purposes in relation to such AI systems should not be stifled by the prohibition, if such research does not amount to use of the AI system in human-machine relations that exposes natural persons to harm and such research is carried out in accordance with recognized ethical standards for scientific research.”
In the explanatory memorandum, eleven occurrences of the word ethics are present:
· It aims to contribute to the achievement of the objective formulated by the European Council to make the Union a leading global player in the development of safe, reliable and ethical artificial intelligence, and it guarantees the protection of ethical principles expressly requested by the European Parliament.
We are a far cry from the 99 occurrences in the Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions of 19 February 2020[8]. It should be noted that the use of the concept of ethics in this Communication had surprised a number of lawyers[9]. However, it appears that what the European Commission implicitly considers ethical is what conforms to (i.e. does not infringe) the values of the European Union[10] and the fundamental rights enshrined in the EU Charter of Fundamental Rights[11].
Conclusion
Ethics is a central point of most of the criticism around AI.
Now, in the absence of any regulation, individuals may not fully realize the impact of this technology in their lives. AI systems can lead to risks of bias, discrimination, confinement, exclusion, loss of cultural pluralism, or the use of tools that are “black boxes” as well as techno-authoritarianism.[12]
Creating a regulatory and legal framework common to the 27 countries of the European Union is therefore a real and important necessity. This piece of regulation will make it possible for the EU to compete fiercely and adequately with China and the United States.
References
[1] Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, COM (2021) 206 final of 21 April 2021 – 2021/0106 (COD).
[2] In 2019, following the European Commission’s outline for AI in Europe, two advisory bodies collaborated on the publication of the ethical guidelines for trustworthy AI. The High-Level Expert Group on Artificial Intelligence (AI HLEG) drafted the guidelines, after consulting with members of the European AI Alliance, which is a multi-stakeholder forum created to provide feedback on regulatory initiatives related to AI. The guidelines proposed seven key requirements that AI systems must meet to be considered trustworthy: 1) human action and human control, 2) technical robustness and security, 3) privacy and data governance, 4) transparency, 5) diversity, non-discrimination and equity, 6) societal and environmental well-being, and 7) accountability. The ethical guidelines (and the assessment tool created to operationalize the guidelines) helped to frame the discussions and structure the debate for the next phases of legislative action. In February 2020, the European Commission built on these guidelines in its White Paper “On Artificial Intelligence: A European Approach to Excellence and Trust”. The White Paper announced the upcoming regulatory measures and presented the key elements of the future framework. Among these key elements was the risk-based approach suggesting that mandatory legal requirements – derived from ethical principles – should be imposed on high-risk AI systems. The White Paper was followed by a public consultation process involving 1,200 stakeholders from diverse backgrounds: citizens, academics, EU Member States, civil society, business and industry. More recently, the Proposal for a Regulation on European Data Governance – or Data Governance Act – has gone further, laying the foundations for a harmonised mechanism for the re-use of certain protected public sector data, such as those covered by intellectual property rights. 
Provisions are also made to facilitate the processing of personal information, collected with the consent of the individuals concerned, for non-commercial purposes, such as medical research, the fight against climate change or the improvement of public services. This is called “data altruism”.
[3] Marvin Minsky (one of the founders of artificial intelligence) gave the following definition of artificial intelligence: “Theories and techniques of making machines do what man would do with a certain intelligence.”
[4] Proposal for a Regulation, Art. 3.1, which refers to Annex I “Artificial Intelligence Techniques and Approaches”. This broad approach echoes the two pillars of AI cited by French MP Cédric Villani in his 2018 parliamentary report: “understanding how human cognition works and reproducing it; creating cognitive processes comparable to those of the human being” (report available at: https://www.vie-publique.fr/rapport/37225-donner-un-sens-lintelligence-artificielle-pour-une-strategie-nation).
[5] For a complete history of artificial intelligence and digital technology, see https://www.historyofcomputers.tech/.
[6] On this subject, see the report: Artificial intelligence governance principles: towards ethical and trustworthy artificial intelligence in the European insurance sector, EIOPA, 2021. For a summary of the report, see A. BEELEN and Ph. DAMBLY, “IA and Assurances – Analysis of the EIOPA report”, ACE Accounting, taxation, audit, business law in Luxembourg, Wolters Kluwer, no. 8, 2021, pp. 9-25.
[7] Communication from the European Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions of 19 February 2020 entitled ‘White Paper – Artificial intelligence – A European approach focused on excellence and trust’ (COM(2020) 65 final).
[8] In this Communication, the text of the requested legislative proposal contained 25 occurrences of the word ethics and the recitals 64 occurrences of that word.
[9] “The Parliament’s proposal is to provide a regulatory framework for taking into account the ethical aspects of the development and exploitation of AI tools, a broadly defined concept to which we will return. The subject is surprising. Is not ethics, by definition, an individual reflection, a questioning, a search committed to a better living together, in a given context and in the face of various possibilities of action? While it is undoubtedly enlightened by moral values, it is not reduced to them; it refers to an individual reflection that requires that everyone, in a given context and according to their possibilities of action, on the one hand question what the Good and the Just mean for them, taking into account those and that which surrounds them, and on the other hand act according to this judgment.” in Y. Poullet, Le RGPD face aux défis de l’intelligence artificielle, CRIDS – Larcier, 2020, p. 139 (our translation).
[10] Recital 1 of the Proposal for a Regulation: “The purpose of this Regulation is to improve the functioning of the internal market by laying down a uniform legal framework in particular for the development, marketing and use of artificial intelligence in conformity with Union values. This Regulation pursues a number of overriding reasons of public interest, such as a high level of protection of health, safety and fundamental rights, and it ensures the free movement of AI-based goods and services cross-border, thus preventing Member States from imposing restrictions on the development, marketing and use of AI systems, unless explicitly authorised by this Regulation. ».
[11] Fundamental rights constitute a special category of the sources of law, partly primary and partly subsidiary. According to Article 6.1 of the Treaty on European Union (TEU), “The Union recognises the rights, freedoms and principles set out in the Charter of Fundamental Rights of the European Union of 7 December 2000, as adapted at Strasbourg, on 12 December 2007, which shall have the same legal value as the Treaties”. Article 6.1 TEU thus attributes to the Charter of Fundamental Rights of the European Union the same legal value as to the Treaties, while Article 6.3 TEU explicitly recognises that “Fundamental rights, as guaranteed by the European Convention for the Protection of Human Rights and Fundamental Freedoms and as they result from the constitutional traditions common to the Member States, shall constitute general principles of the Union’s law.”
[12] Ph. Dambly, Webinar “Law, algorithms, ethics”, 2020, available at https://dauphine.psl.eu/dauphine/media-et-communication/article/webinaire-droit-algorithmes-ethique.