✍️ Article by Avantika Bhandari, SJD. Her research areas cover indigenous knowledge and its protection, human rights, and intellectual property rights.
Overview: The Artificial Intelligence Act, the first-ever legal framework for AI regulation, was proposed by the European Commission on April 21, 2021, with the following specific objectives:
- Ensure that AI systems placed on the EU market are safe and respectful of fundamental rights and Union values;
- Ensure legal certainty to facilitate investment and innovation in AI;
- Enhance governance and enforcement of the law on fundamental rights and safety requirements that apply to AI systems;
- Facilitate the development of safe and trustworthy AI applications and prevent market fragmentation.
The proposed rules would be enforced at the Member State level, with a cooperation mechanism at the Union level through the establishment of a European Artificial Intelligence Board. Additional measures are proposed to reduce the regulatory burden on, and support innovation in, small and medium-sized enterprises (SMEs) and startups. The proposal is coherent “with the Commission’s overall digital strategy in its contribution to promoting technology that works for people, one of the three main pillars of the policy orientation and objectives announced in the Communication Shaping Europe’s digital future.” It is also closely linked to the Data Governance Act and the Open Data Directive, which will establish mechanisms and services for using, sharing, and pooling data that are crucial for developing data-driven AI.
Definition of Artificial Intelligence under the Act
The proposal does not define AI as such; instead, it defines AI systems. An AI system (Article 3(1)) is defined as “software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.” The definition is expansive and is largely derived from the Organisation for Economic Co-operation and Development (OECD) definition. In November 2021, however, the Council put forward a modified definition of AI that would narrow the scope to machine learning.
A Risk-Based Approach
The AI Act classifies AI systems into four risk-based categories, with higher risk levels subject to additional safeguards. AI systems that present unacceptable risks are prohibited; high-risk systems must comply with specific requirements; and lower-risk systems face limited or no requirements. The four risk categories are as follows:
- Unacceptable risk: prohibited AI practices – Article 5 of the AI Act bans harmful AI practices that are considered a threat to people’s safety, rights, and livelihoods due to the unacceptable risks they create. For instance:
- AI systems that deploy harmful manipulative ‘subliminal techniques’
- AI systems that exploit the vulnerabilities of specific groups (e.g., due to age or physical or mental disability)
- Social scoring used by public authorities
- ‘Real-time’ remote biometric identification systems used by law enforcement in publicly accessible spaces, except in limited circumstances.
- High risk: regulated high-risk AI systems – Article 6 of the AI Act governs AI systems that create a high risk to safety or fundamental rights but do not fall under the ‘unacceptable risk’ category. The draft distinguishes two categories of high-risk systems:
- AI systems used as safety components of products that are subject to third-party conformity assessment under existing EU harmonization legislation (Annex II); and
- Stand-alone AI systems deployed in eight specific fields listed in Annex III, which the Commission is authorized to update as it deems fit (Article 7). These include biometric identification and categorization of natural persons; educational and vocational training; management of critical infrastructure; law enforcement; administration of justice and democratic processes; and border control management.
Furthermore, providers of high-risk AI systems would have to register their systems in an EU database before placing them on the market or putting them into service. Providers of stand-alone high-risk AI systems not covered by existing EU conformity-assessment regimes would have to carry out their own conformity assessment to demonstrate compliance with the requirements for high-risk AI systems. High-risk AI systems would also have to meet additional requirements concerning risk management, testing, training data and data governance, human oversight, transparency, and cybersecurity.
- Limited risk: transparency obligations – AI systems posing limited risk, such as chatbots, emotion recognition systems, and deepfakes, would be subject to a limited set of transparency obligations.
- Low or minimal risk: no obligations – AI systems that present minimal risk can be used in the EU without additional legal obligations. However, the Act envisages codes of conduct to encourage providers of such systems to voluntarily apply the requirements that are mandatory for high-risk AI systems.
The risk-based approach focuses on ‘organizing AI practices and systems based on risk level.’ By classifying AI systems according to the risk they pose, the European Commission positions itself as a risk manager, and scholars have observed that a large part of the proposal is phrased in the language of risk management. For instance, the explanatory memorandum states that risks should be ‘calculated taking into account the impact on rights and safety,’ and the proposal intends to tailor its regulations to the ‘intensity and scope of the risks that AI systems can generate.’