Montreal AI Ethics Institute

Democratizing AI ethics literacy


Regulating Artificial Intelligence: The EU AI Act – Part 1 (i)

June 16, 2023

✍️ Article by Avantika Bhandari, SJD. Her research areas cover indigenous knowledge and its protection, human rights, and intellectual property rights.


Overview: Since the original formulation and publication of the EU AI Act (“the Act”), certain amendments to it have been introduced. The proposal aims to ensure that AI systems placed on the market and used in the Union are safe and respect existing law and fundamental rights, while providing legal certainty to facilitate innovation and investment in AI.


This article captures some of the proposed elements of the Act.

  1. Narrowing the definition of an AI system: The narrowed definition now covers systems developed through machine learning approaches and logic- and knowledge-based approaches, to distinguish AI systems from more classical software. To ensure that the Act remains flexible with changing times, “a possibility to adopt implementing acts to specify further and update techniques under machine learning approaches and logic- and knowledge-based approaches has been added in Article 4.”
  2. Prohibited AI Practices: The proposal extends the prohibition on using AI for social scoring to private actors. Additionally, it broadens the prohibition on AI systems that exploit the vulnerabilities of a specific group of persons to also cover persons who are vulnerable due to their social or economic situation.
  3. Changes to high-risk AI uses (Annex III): The proposal deletes three AI systems from the list (deep fake detection by law enforcement authorities, crime analytics, and verification of the authenticity of travel documents). It adds, however, two others (critical digital infrastructure and health insurance) to the categories of high-risk AI use. Article 7(1) is modified to allow high-risk use cases to be added to the list as well as deleted from it. The proposal also includes an additional ‘horizontal layer’ on top of the high-risk classification to ensure that AI systems that are not likely to cause serious fundamental rights violations or other significant risks are not captured.
  4. General Purpose AI: The proposal creates a new Title IA for general-purpose AI systems for situations where they may become high-risk. It also specifies that certain requirements for high-risk systems would also apply to general-purpose AI systems. There is a possibility of adopting further implementing acts that delve into the modalities of cooperation between providers of general-purpose AI systems and other providers. The new provisions call for providers of general-purpose AI systems to cooperate with, and provide information to, providers who intend to place their systems on the Union market as high-risk AI systems, so that the latter can comply with their obligations. This cooperation between providers shall preserve intellectual property rights and trade secrets.
  5. Law Enforcement Purposes: Some changes have been made to the use of AI systems for law enforcement purposes, considering their ‘particular specificities’. For instance, regarding remote biometric systems and real-time biometric identification systems that were originally prohibited, the proposal has widened the exceptions to the prohibition, allowing law enforcement authorities to use such systems where strictly necessary for law enforcement purposes.
  6. Transparency: The proposal heightens the transparency provisions concerning the use of high-risk AI systems. Article 51 has been amended to require certain users (public authorities, agencies, or bodies) to register in the EU database for high-risk AI systems. Furthermore, a natural or legal person who has reason to believe that the provisions of the AI Act have been infringed may file a complaint with the market surveillance authority.
  7. Supporting Innovation: To promote technological innovation, new provisions have been added that allow unsupervised real-world testing of AI systems (Articles 54a and 54b) under specific circumstances. Furthermore, the AI sandboxes, which establish a controlled environment for the testing and development of AI systems under direct supervision by the national competent authorities, should also allow for the testing of innovative AI systems in real-world scenarios.
  8. Protection of SMEs: The proposal gives regulatory relief to SMEs and start-ups. The obligations and requirements for general-purpose AI systems shall not apply to micro-enterprises and SMEs. Furthermore, to alleviate the administrative burden for smaller companies, the new proposal includes a list of actions to be undertaken by the Commission to support such operators (Article 55).
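The tiered logic described above (prohibited practices, the Annex III high-risk list with its new ‘horizontal layer’ filter, and everything else) can be sketched as a toy classification function. This is purely an illustrative simplification for readers, not the Act’s legal test: the category names, the flag names, and the reduced Annex III set are all hypothetical stand-ins.

```python
# Illustrative toy model of the amended risk-tier logic (NOT a legal test).
# Category names and flags are hypothetical simplifications of the proposal.

# A reduced, hypothetical stand-in for Annex III after the amendments:
ANNEX_III_HIGH_RISK = {
    "critical_digital_infrastructure",  # added by the proposal
    "health_insurance",                 # added by the proposal
    # "deep_fake_detection_by_law_enforcement" was deleted by the proposal
}

def classify(use_case: str, *, is_social_scoring: bool = False,
             poses_significant_risk: bool = True) -> str:
    """Return a simplified risk tier for an AI use case."""
    if is_social_scoring:
        # The prohibition now extends to private actors as well.
        return "prohibited"
    if use_case in ANNEX_III_HIGH_RISK:
        # 'Horizontal layer': Annex III systems unlikely to cause serious
        # fundamental rights violations are not captured as high-risk.
        return "high-risk" if poses_significant_risk else "not high-risk"
    return "minimal/limited risk"

print(classify("health_insurance"))                                # high-risk
print(classify("health_insurance", poses_significant_risk=False))  # not high-risk
print(classify("any_use", is_social_scoring=True))                 # prohibited
```

The point of the sketch is the ordering: prohibitions are checked first, the Annex III list second, and the horizontal layer acts as a filter within the high-risk branch rather than a separate tier.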

The European Parliament has voted on the amended AI Act. The adoption of the ‘general approach’ will enable the Council to enter negotiations with the European Parliament. If negotiations go smoothly and according to plan, the ambitious AI Act could be formally adopted by 2024. Like the General Data Protection Regulation (GDPR), the world’s first artificial intelligence law would have an extraterritorial effect (the ‘Brussels Effect’).

References

  1. https://data.consilium.europa.eu/doc/document/ST-14954-2022-INIT/en/pdf
  2. https://www.insideprivacy.com/artificial-intelligence/eu-ai-policy-and-regulation-what-to-look-out-for-in-2023
  3. https://www.europarl.europa.eu/legislative-train/theme-a-europe-fit-for-the-digital-age/file-regulation-on-artificial-intelligence
  4. https://www.consilium.europa.eu/en/press/press-releases/2022/12/06/artificial-intelligence-act-council-calls-for-promoting-safe-ai-that-respects-fundamental-rights/
  5. https://www.goodwinlaw.com/en/insights/publications/2023/05/alerts-technology-aiml-eu-ai-act
