
The European Commission’s Artificial Intelligence Act (Stanford HAI Policy Brief)

June 29, 2021

🔬 Research summary by Abhishek Gupta (@atg_abhishek), our Founder, Director, and Principal Researcher.

[Original paper by Marietje Schaake]


Overview: The recently released Artificial Intelligence Act in the EU has sparked a lively debate about what it means for different AI applications, the companies building these systems, and, more broadly, the future of innovation and regulation. Schaake provides an excellent overview of the Act and analyzes the implications and sentiments surrounding it, including the prospects for global cooperation between regions such as the US and the EU.


Introduction

Just as companies scrambled to become compliant in the wake of the GDPR in 2018, the announcement of the AI Act has triggered a rush among organizations to find ways to comply while preserving their ability to innovate. The current paradigm of AI applications incentivizes ever more invasive data collection, as these systems make recommendations and decisions that influence people's lives in increasingly significant ways.

The policy brief provides a quick overview of the definition of AI used in the AI Act, which kinds of applications it applies to (high-risk ones), what "high-risk" means, the banned use cases and their exceptions, what conformity assessments are, the implications of the Act for the rest of the world, and how civil society and other organizations have reacted. The reactions are mixed, but Schaake concludes on an optimistic note: the Act can become a rallying point for achieving more consistency in AI development, cybersecurity, and related practices across the world. We shouldn't treat the harms from AI systems as inevitable.

The Act takes an interesting path in defining AI: a broad, overarching definition combined with some specifically defined categories and use cases. This hybrid approach is supplemented by the power to amend the definitions over time to keep them compatible with future technical and sociological developments. That adaptability will be critical to the Act's continued applicability, something lacking in many other proposed regulations that are either too vague or too specific.

Risk and unacceptable uses

The central operating mechanism of the Act is to focus on high-risk AI use cases, which include biometric identification, critical infrastructure that can significantly impact human lives, determining access to education and employment, worker management, access to private and public services (e.g., finance), law enforcement, migration and immigration, and the administration of justice and democratic processes. Article 7(2) gives more details on how to make these assessments. Such high-risk systems cannot be released to the public before undergoing a conformity assessment, which determines whether all the requirements of the AIA risk framework have been met.

The prohibited use cases are distorting human behavior, exploiting the vulnerabilities of marginalized groups, social scoring, and real-time biometric identification in public spaces (with certain exceptions, such as uses mandated by national law, tracking terrorist activities, or searching for missing persons).
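
To make the risk-tiering concrete, here is a minimal Python sketch of how an organization might encode these categories in an internal triage tool. The tier names and the area labels in `HIGH_RISK_AREAS` and `PROHIBITED_PRACTICES` are our own shorthand for the areas described above, not the Act's legal text, and any real classification would require legal review.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "high-risk: conformity assessment required"
    MINIMAL = "minimal risk"

# High-risk application areas described above (illustrative labels, not legal text).
HIGH_RISK_AREAS = {
    "biometric_identification",
    "critical_infrastructure",
    "education_and_employment_access",
    "worker_management",
    "essential_public_and_private_services",  # e.g., finance
    "law_enforcement",
    "migration_and_immigration",
    "justice_and_democratic_processes",
}

# Prohibited practices named in the Act (exceptions omitted for brevity).
PROHIBITED_PRACTICES = {
    "behavioral_distortion",
    "exploiting_vulnerable_groups",
    "social_scoring",
    "realtime_public_biometric_identification",
}

def classify(application_area: str) -> RiskTier:
    """Map an application area to its AIA risk tier (a first-pass screen only)."""
    if application_area in PROHIBITED_PRACTICES:
        return RiskTier.UNACCEPTABLE
    if application_area in HIGH_RISK_AREAS:
        return RiskTier.HIGH
    return RiskTier.MINIMAL

assert classify("law_enforcement") is RiskTier.HIGH
assert classify("social_scoring") is RiskTier.UNACCEPTABLE
```

A lookup like this can only serve as a first screen; Article 7(2) and the conformity assessment process govern the actual determination.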

Complying with the AIA

Articles 9 through 15 of the AIA provide guidance on how to comply with the Act. They include practices like maintaining a risk management system, data governance and management, transparency via constantly updated documentation of the high-risk AI system, logging and traceability throughout the AI system, appropriate human oversight, and balancing the system's accuracy against other desired properties like robustness and explainability. Some of these requirements will sound familiar to those who have worked in compliance before and helped their organizations transition into the GDPR era; others emerge from best practices in the MLOps domain. A combined policy and technical approach is the way forward to build AIA-compliant systems, and it will help in meeting the post-market monitoring requirements proposed in the AIA.
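
As one concrete illustration of the logging and traceability practice, here is a minimal Python sketch of structured, per-prediction audit records of the kind an MLOps pipeline might keep to support post-market monitoring. The `log_prediction` helper, its field names, and the file path are assumptions for illustration; the AIA prescribes the obligations, not this schema.

```python
import json
import logging
import uuid
from datetime import datetime, timezone

# Structured audit logger: one JSON record per inference call, appended to a
# file that post-market monitoring and conformity reviews can replay.
audit_log = logging.getLogger("aia_audit")
audit_log.setLevel(logging.INFO)
audit_log.addHandler(logging.FileHandler("predictions.audit.jsonl"))

def log_prediction(model_id: str, model_version: str,
                   inputs: dict, output, reviewer: str | None = None) -> str:
    """Record one prediction for traceability; returns the audit record id."""
    record_id = str(uuid.uuid4())
    audit_log.info(json.dumps({
        "record_id": record_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,  # ties the decision to its documentation
        "inputs": inputs,                # data governance: know what went in
        "output": str(output),
        "human_reviewer": reviewer,      # human-oversight hook, if applicable
    }))
    return record_id

# Example: log a hypothetical credit-scoring decision pending human review.
log_prediction("credit_scorer", "2.3.1",
               {"income": 52000, "tenure_months": 18}, "deny", reviewer=None)
```

Keeping the model version and inputs alongside each output lets a reviewer tie any individual decision back to the documentation and data governance records for that version of the system.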

We can expect intense lobbying from corporations and other organizations to tailor the AIA to align better with their needs. Standard-setting organizations will become more powerful through economic, legal, and political levers, and we must account for the power imbalances that can arise through this channel. Finally, through the Brussels effect, we may see a positive shift in attitudes towards building more ethical, safe, and inclusive AI systems worldwide.

Between the lines

In line with the Montreal AI Ethics Institute's work on research summaries, policy briefs like this one provide a great avenue to catch up on pertinent issues without diving into all the details until needed. They are especially valuable for those who are affected by policy and technical changes in the field but lack the time and resources to keep up with its fast pace. The next step in making such pieces more actionable is to analyze case studies. In the case of the AI Act, it would be great to see how it impacts currently deployed high-risk AI systems, and what process and technical changes would be required to bring these systems into conformity so they can remain deployed in the field. Companies that act quickly on these compliance requirements will surely gain a competitive edge, mirroring what happened during the transition to the GDPR era.
