The European Commission’s Artificial Intelligence Act (Stanford HAI Policy Brief)

June 29, 2021

🔬 Research summary by Abhishek Gupta (@atg_abhishek), our Founder, Director, and Principal Researcher.

[Original policy brief by Marietje Schaake]


Overview: With the recent release of the Artificial Intelligence Act in the EU, a lively debate has erupted around what it means for different AI applications, for the companies building these systems, and, more broadly, for the future of innovation and regulation. Schaake provides an excellent overview of the Act along with an analysis of the implications and reactions surrounding it, including the prospects for global cooperation between regions such as the US and the EU.


Introduction

Just as companies scrambled to become compliant in the wake of the GDPR in 2018, the announcement of the AI Act has triggered a frenzy among organizations to find ways to become compliant while maintaining their ability to innovate. The current paradigm of AI applications incentivizes ever more invasive data collection, while these systems provide recommendations, make decisions, and influence people's lives in increasingly significant ways.

The policy brief provides a quick overview of the definition of AI used in the AI Act, the kinds of applications it applies to (high-risk), what high-risk means, some banned use cases, some exceptions to those banned use cases, what conformity assessments are, the implications of the AI Act for the rest of the world, and how civil society and other organizations have reacted to the Act. Reactions are mixed, but Schaake concludes on an optimistic note: the Act can become a rallying point for more consistent practices worldwide, in AI development as well as in cybersecurity and other domains. We shouldn't treat the harms from AI systems as inevitable.

The definition of AI used in the Act takes an interesting path: a broad, overarching definition paired with some specifically defined categories and use cases. This hybrid approach is supplemented by the power to amend the definitions over time, keeping them compatible with future technical and sociological developments. Such adaptability will be critical to the Act's continued applicability, and it is lacking in many other proposed regulations, which tend to be either too vague or too specific.

Risk and unacceptable uses

The central operating mechanism of the Act is its treatment of high-risk AI use cases, which include biometric identification, critical infrastructure that can significantly impact human lives, determining access to education and employment, worker management, access to private and public services (e.g., finance), law enforcement, migration and immigration, and the administration of justice and democratic processes. Article 7(2) gives more detail on how to make these assessments. Such high-risk systems cannot be released to the public before undergoing a conformity assessment, which determines whether all the requirements of the AIA risk framework have been met.

Prohibited use cases include distorting human behavior, exploiting the vulnerabilities of marginalized groups, social scoring, and real-time biometric identification in public spaces (with certain exceptions, such as those mandated by national law, tracking terrorist activities, or searching for missing persons).
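To make the Act's tiered structure concrete, here is a minimal Python sketch of how an organization might triage a proposed use case against the prohibited and high-risk categories described above before allowing deployment. The category labels, the `triage` and `may_deploy` functions, and the lookup-table approach are hypothetical simplifications for illustration only; the actual determination is a legal analysis under the Act (see Article 7(2)), not a membership check.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"   # banned outright
    HIGH_RISK = "high_risk"     # conformity assessment required before release
    MINIMAL = "minimal"         # no AIA-specific gate in this sketch

# Hypothetical, simplified labels paraphrasing the Act's category lists;
# real classification requires legal analysis, not a lookup table.
PROHIBITED_USES = {
    "behavioral_distortion",
    "exploiting_vulnerable_groups",
    "social_scoring",
    "realtime_public_biometric_id",  # narrow exceptions exist in the Act
}
HIGH_RISK_USES = {
    "biometric_identification",
    "critical_infrastructure",
    "education_access",
    "employment_and_worker_management",
    "essential_services_access",  # e.g., finance
    "law_enforcement",
    "migration_and_immigration",
    "justice_and_democratic_processes",
}

def triage(use_case: str) -> RiskTier:
    """Map a proposed use case onto an AIA risk tier (illustrative only)."""
    if use_case in PROHIBITED_USES:
        return RiskTier.PROHIBITED
    if use_case in HIGH_RISK_USES:
        return RiskTier.HIGH_RISK
    return RiskTier.MINIMAL

def may_deploy(use_case: str, conformity_assessment_passed: bool) -> bool:
    """Prohibited uses may never deploy; high-risk uses only after
    passing a conformity assessment."""
    tier = triage(use_case)
    if tier is RiskTier.PROHIBITED:
        return False
    if tier is RiskTier.HIGH_RISK:
        return conformity_assessment_passed
    return True

assert may_deploy("social_scoring", conformity_assessment_passed=True) is False
assert may_deploy("law_enforcement", conformity_assessment_passed=False) is False
assert may_deploy("law_enforcement", conformity_assessment_passed=True) is True
```

The point of the sketch is the ordering of the gates: prohibition is checked before risk tier, and the conformity assessment acts as a hard precondition on market release rather than an after-the-fact audit.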

Complying with the AIA

Articles 9 through 15 of the AIA provide guidance on how to comply with the Act, covering practices like maintaining a risk management system, data governance and management, transparency via continuously updated documentation of the high-risk AI system, logging and traceability throughout the AI system, appropriate human oversight, and balancing the system's accuracy against other desired properties like robustness and explainability. Some of these requirements will sound familiar to those who have worked in compliance before and helped their organizations transition into the GDPR era. Others emerge from best practices in the MLOps domain. A combined policy and technical approach is the way forward for building AIA-compliant systems, and it will also help in meeting the post-market monitoring requirements proposed in the AIA.
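As an illustration of what the logging-and-traceability requirement might look like from the MLOps side, here is a minimal Python sketch that wraps an arbitrary prediction function with an append-only audit trail. The `AuditedModel` class, the `PredictionRecord` schema, and the file-based JSONL log are hypothetical choices for this sketch, not anything prescribed by the AIA; a production system would likely use tamper-evident storage and tie the records into the documentation and human-oversight workflows mentioned above.

```python
import json
import time
import uuid
from dataclasses import dataclass, asdict
from typing import Any, Callable

@dataclass
class PredictionRecord:
    """One traceable entry in the audit log (hypothetical schema)."""
    record_id: str
    timestamp: float
    model_version: str
    inputs: Any
    output: Any
    human_override: bool = False  # hook for human-oversight workflows

class AuditedModel:
    """Wraps any prediction function with an append-only JSONL audit trail,
    in the spirit of the AIA's logging and traceability expectations."""

    def __init__(self, predict_fn: Callable[[Any], Any],
                 model_version: str, log_path: str = "audit_log.jsonl"):
        self.predict_fn = predict_fn
        self.model_version = model_version
        self.log_path = log_path

    def predict(self, inputs: Any) -> Any:
        output = self.predict_fn(inputs)
        record = PredictionRecord(
            record_id=str(uuid.uuid4()),
            timestamp=time.time(),
            model_version=self.model_version,
            inputs=inputs,
            output=output,
        )
        # Append one JSON line per decision so the trail is never rewritten.
        with open(self.log_path, "a") as f:
            f.write(json.dumps(asdict(record)) + "\n")
        return output

# Usage: wrap an existing scoring function so every decision is traceable.
model = AuditedModel(lambda applicant: applicant["income"] > 50_000,
                     model_version="credit-v1.2")
decision = model.predict({"income": 62_000})
```

An append-only, per-decision log like this is also the raw material for the post-market monitoring the AIA proposes: auditors can reconstruct which model version produced which decision from which inputs, and human overrides can be recorded against the same trail.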

We can expect intense lobbying from corporations and other organizations to tailor the AIA to better align with their needs. Standard-setting organizations will become more powerful through economic, legal, and political levers, and we must account for the power imbalances that can arise through this channel. Finally, through the Brussels effect, we may see a positive shift in attitudes towards building more ethical, safe, and inclusive AI systems worldwide.

Between the lines

In line with the Montreal AI Ethics Institute's work creating research summaries, policy briefs like this one provide a great avenue for catching up on pertinent issues without diving into all the details until needed. They are especially valuable for those who are impacted by policy and technical changes in the field but lack the time and resources to keep pace with it. The next step in making such pieces more actionable is to analyze case studies. In the case of the AI Act, it would be great to see how it impacts currently deployed high-risk AI systems, and what process and technical changes would be required to bring those systems into conformity so they can remain deployed in the field. Companies that act quickly on these compliance requirements will surely gain a competitive edge, mirroring what happened during the transition to the GDPR era.
