Regulating Artificial Intelligence: The EU AI Act – Part 1 (i)

June 16, 2023

✍️ Article by Avantika Bhandari, SJD. Her research areas cover indigenous knowledge and its protection, human rights, and intellectual property rights.


Overview: Since the original publication of the proposed EU AI Act (the Act), the Council has introduced certain amendments to it. The amended proposal aims to ensure that AI systems placed on the market and used in the Union are safe, respect existing law and fundamental rights, and offer legal certainty to facilitate innovation and investment in AI.


This article captures some of the proposed amendments to the Act.

  1. Narrowing the definition of an AI system: The narrowed definition now covers systems developed through machine learning approaches and logic- and knowledge-based approaches, in order to distinguish AI systems from more classical software. To ensure that the Act remains flexible with changing times, “a possibility to adopt implementing acts to specify further and update techniques under machine learning approaches and logic- and knowledge-based approaches has been added in Article 4.”
  2. Prohibited AI practices: The proposal extends the prohibition on using AI for social scoring to the private sector. It also broadens the prohibition on AI systems that exploit the vulnerabilities of a specific group of persons so that it covers persons who are vulnerable due to their social or economic situation.
  3. Changes to high-risk AI uses (Annex III): The proposal deletes three AI systems from the list (deep fake detection by law enforcement authorities, crime analytics, and verification of the authenticity of travel documents) and adds two (critical digital infrastructure and health insurance) to the categories of high-risk AI use. Article 7(1) is modified so that high-risk use cases can be both added to and deleted from the list. The proposal also includes an additional ‘horizontal layer’ on top of the high-risk classification to ensure that AI systems that are not likely to cause serious fundamental rights violations or other significant risks are not captured (see the sketch after this list).
  4. General-purpose AI: The proposal creates a new Title IA for general-purpose AI systems to address situations where they may become high-risk, and specifies that certain requirements for high-risk systems would also apply to general-purpose AI systems. Further implementing acts may be adopted to detail the modalities of cooperation between providers of general-purpose AI systems and other providers. The new provisions call for providers of general-purpose AI systems to cooperate with, and provide information to, providers who intend to place such systems on the Union market as high-risk AI systems, so that the latter can comply with their obligations. This cooperation between providers shall preserve intellectual property rights and trade secrets.
  5. Law enforcement purposes: Some changes have been made to the use of AI systems in view of the ‘particular specificities’ of law enforcement. For instance, for remote biometric systems and real-time biometric identification systems that were originally prohibited, the proposal widens the exceptions to the prohibition, allowing law enforcement authorities to use such systems where strictly necessary for law enforcement purposes.
  6. Transparency: The proposal strengthens the transparency provisions concerning the use of high-risk AI systems. Article 51 has been amended so that certain users (public authorities, agencies, or bodies) will also be required to register in the EU database for high-risk AI systems. Furthermore, a natural or legal person who has reason to consider that the provisions of the AI Act have been infringed may file a complaint with the market surveillance authority.
  7. Supporting innovation: To promote technological innovation, new provisions have been added that allow unsupervised real-world testing of AI systems (Articles 54a and 54b) under specific circumstances. Furthermore, the AI regulatory sandboxes, which are supposed to establish a controlled environment for the testing and development of AI systems under the direct supervision of national competent authorities, should also allow for the testing of innovative AI systems in real-world conditions.
  8. Protection of SMEs: The proposal gives regulatory relief to SMEs and start-ups. The obligations and requirements for general-purpose AI systems shall not apply to micro-enterprises and SMEs. Furthermore, to alleviate the administrative burden on smaller companies, the new proposal includes a list of actions to be undertaken by the Commission to support such operators (Article 55).
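
The interaction between the prohibited-practice list, the Annex III high-risk categories, and the new ‘horizontal layer’ (points 2 and 3 above) can be pictured as a simple decision sequence. The following Python sketch is purely illustrative: the category names, field names, and the `poses_significant_risk` test are assumptions made for this example, not the Act’s legal wording or criteria.

```python
from dataclasses import dataclass

# Hypothetical, simplified model of the risk-tiering logic described above.
PROHIBITED_PRACTICES = {
    "social_scoring",              # prohibition now extends to the private sector
    "exploiting_vulnerabilities",  # incl. social or economic situation
}

HIGH_RISK_AREAS = {                # Annex III areas (abridged, illustrative)
    "critical_digital_infrastructure",  # added by the proposal
    "health_insurance",                 # added by the proposal
    "employment", "education", "law_enforcement",
}

@dataclass
class AISystem:
    practice: str | None = None         # matches a prohibited practice, if any
    annex_iii_area: str | None = None   # Annex III area the system falls under
    poses_significant_risk: bool = True # stand-in for the 'horizontal layer' test

def classify(system: AISystem) -> str:
    if system.practice in PROHIBITED_PRACTICES:
        return "prohibited"
    if system.annex_iii_area in HIGH_RISK_AREAS:
        # Horizontal layer: systems not likely to cause serious fundamental
        # rights violations or other significant risks are not captured.
        return "high-risk" if system.poses_significant_risk else "not high-risk"
    return "minimal/limited risk"

print(classify(AISystem(annex_iii_area="health_insurance")))   # high-risk
print(classify(AISystem(annex_iii_area="health_insurance",
                        poses_significant_risk=False)))        # not high-risk
```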

The European Parliament has since voted on its position on the amended AI Act. The Council’s adoption of its ‘general approach’ enables it to enter negotiations with the European Parliament. If everything goes smoothly and according to plan, the ambitious AI Act could be officially adopted by 2024. Like the General Data Protection Regulation (GDPR), the world’s first comprehensive artificial intelligence law would have extraterritorial effect (the ‘Brussels Effect’).

References

  1. https://data.consilium.europa.eu/doc/document/ST-14954-2022-INIT/en/pdf
  2. https://www.insideprivacy.com/artificial-intelligence/eu-ai-policy-and-regulation-what-to-look-out-for-in-2023
  3. https://www.europarl.europa.eu/legislative-train/theme-a-europe-fit-for-the-digital-age/file-regulation-on-artificial-intelligence
  4. https://www.consilium.europa.eu/en/press/press-releases/2022/12/06/artificial-intelligence-act-council-calls-for-promoting-safe-ai-that-respects-fundamental-rights/
  5. https://www.goodwinlaw.com/en/insights/publications/2023/05/alerts-technology-aiml-eu-ai-act