Regulating Artificial Intelligence: The EU AI Act – Part 1

November 27, 2022

✍️ Article by Avantika Bhandari, SJD. Her research covers indigenous knowledge and its protection, human rights, and intellectual property rights.


Overview: The Artificial Intelligence Act, the first-ever legal framework for AI regulation, was proposed by the European Commission on April 21, 2021, with the following specific objectives:

  • Ensure that AI systems placed on the EU market are safe and respectful of fundamental rights and Union values;
  • Ensure legal certainty to facilitate investment and innovation in AI;
  • Enhance governance and enforcement of the law on fundamental rights and safety requirements that apply to AI systems;
  • Facilitate the development of safe and trustworthy AI applications and prevent market fragmentation.

The proposed rules would be enforced through a governance system at the Member State level, with a cooperation mechanism at the Union level through the establishment of a European Artificial Intelligence Board. Additional measures are proposed to reduce the regulatory burden and support innovation in small and medium-sized enterprises (SMEs) and startups. The proposal is coherent “with the Commission’s overall digital strategy in its contribution to promoting technology that works for people, one of the three main pillars of the policy orientation and objectives announced in the Communication Shaping Europe’s digital future.” It is also closely linked to the Data Governance Act and the Open Data Directive, which will establish mechanisms and services for using, sharing, and pooling data that are crucial for developing data-driven AI.

Definition of Artificial Intelligence under the Act

The proposal does not define AI itself; instead, it defines AI systems. An AI system (Article 3(1)) is defined as “software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.” The definition is expansive and is mainly derived from the Organisation for Economic Co-operation and Development’s (OECD) definition. In November 2021, however, the Council put forward a modified definition that would narrow the scope to machine learning.

A Risk-Based Approach

The AI Act classifies AI systems into four risk-based categories, where higher risk levels must comply with additional safeguard mechanisms. AI systems that represent unacceptable risks are prohibited, high-risk systems must comply with specific requirements, and low-risk systems must comply with few or no requirements. The four types of risk are as follows:

  1. Unacceptable risk: Prohibited AI practices – Article 5 of the AI Act bans harmful AI practices that are considered a threat to people’s safety, rights, and livelihoods due to the unacceptable risks they create. For instance:
  • AI systems that deploy harmful manipulative ‘subliminal techniques’
  • AI systems that exploit specific vulnerable groups (e.g., persons with physical or mental disabilities)
  • Social scoring used by public authorities
  • Real-time remote biometric identification systems used by law enforcement, except in limited circumstances.
  2. High risk: Regulated high-risk AI systems – Article 6 of the AI Act regulates AI systems that create a high risk to safety and fundamental rights but do not fall under the ‘unacceptable risk’ category. The draft identifies two categories of high-risk systems:
  • AI systems intended to be used as safety components of products that are already subject to third-party conformity assessment under sectoral EU legislation (Annex II);
  • AI systems deployed in eight specific fields listed in Annex III, which the Commission is authorized to update as it deems fit (Article 7). These include biometric identification and categorization of natural persons; educational and vocational training; management of critical infrastructure; law enforcement; administration of justice and democratic processes; and border control management.

Furthermore, providers of high-risk AI systems would have to register their systems in the EU database before placing them on the market or putting them into service. Providers whose systems are not already covered by existing sectoral conformity regimes would have to conduct their own conformity assessment to demonstrate compliance with the requirements for high-risk AI systems. High-risk AI systems would also have to comply with additional requirements such as risk management, testing, training data and data governance, human oversight, transparency, and cybersecurity.

  3. Limited risk: Transparency obligations – AI systems with limited risk, such as chatbots, emotion recognition systems, and deepfakes, would be subject to a limited set of transparency obligations.
  4. Low or minimal risk: No obligations – AI systems that present minimal risk can be used in the EU without additional legal obligations. However, the Act provides for codes of conduct to encourage providers to voluntarily apply the mandatory requirements for high-risk AI systems.

The risk-based approach focuses on ‘organizing AI practices and systems based on risk level.’ By classifying AI systems by risk level, the European Commission positions itself as a risk manager, and scholars note that a large part of the Proposal is phrased in the language of risk management. For instance, the explanatory memorandum mentions that the risks should be ‘calculated taking into account the impact on rights and safety,’ and the Proposal intends to tailor the regulations to the ‘intensity and scope of the risks that AI systems can generate.’
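To make the tiered logic easier to see at a glance, here is a minimal Python sketch modeling the four risk tiers and the obligations each triggers. It is purely illustrative: the RiskTier names, the OBLIGATIONS mapping, and the obligations_for helper are hypothetical constructs for this article, paraphrasing the draft rather than quoting anything defined in the Act.

from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited (Article 5)"
    HIGH = "regulated (Article 6)"
    LIMITED = "transparency obligations"
    MINIMAL = "no mandatory obligations"

# Hypothetical mapping from tier to the obligations the draft attaches to it;
# the list entries paraphrase the Act's requirements and are not quoted from it.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["banned from the EU market"],
    RiskTier.HIGH: [
        "registration in the EU database",
        "conformity assessment",
        "risk management and testing",
        "training data and data governance",
        "human oversight",
        "transparency",
        "cybersecurity",
    ],
    RiskTier.LIMITED: ["disclosure that users are interacting with an AI system"],
    RiskTier.MINIMAL: [],  # voluntary codes of conduct only
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the compliance obligations a given risk tier triggers."""
    return OBLIGATIONS[tier]

print(obligations_for(RiskTier.LIMITED))

In practice, of course, assigning a real system to a tier is a matter of legal interpretation of the Act’s definitions and annexes, not a lookup table.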
