🔬 Research summary by Abhishek Gupta (@atg_abhishek), our Founder, Director, and Principal Researcher.
[Original paper by Marietje Schaake]
Overview: With the recently released Artificial Intelligence Act in the EU, a lively debate has erupted around what this means for different AI applications, the companies building these systems, and, more broadly, the future of innovation and regulation. Schaake provides an excellent overview of the Act, analyzing its implications and the sentiments around it, including prospects for cooperation between different regions of the world, such as the US and the EU.
Introduction
Just as the GDPR triggered a scramble in 2018 as companies rushed to become compliant, the announcement of the AI Act has set off a frenzy among organizations looking for ways to become compliant while preserving their ability to innovate. The current paradigm of AI applications incentivizes ever more invasive data collection to power systems that provide recommendations, make decisions, and influence people's lives in increasingly significant ways.
The policy brief provides a quick overview of the definition of AI used in the AI Act, the kinds of applications it applies to (high-risk), what high-risk means, the banned use cases and the exceptions to them, what conformity assessments are, the implications of the AI Act for the rest of the world, and how civil society and other organizations have reacted to the Act. Reactions are mixed, but Schaake concludes on an optimistic note: the Act can become a rallying point for achieving more consistency worldwide in AI development, cybersecurity, and related practices. We shouldn't treat the harms from AI systems as inevitable.
The definition of AI used in the Act follows an interesting path: a broad, overarching definition supplemented by specifically defined categories and use cases. This hybrid approach is paired with the power to amend these definitions over time to keep them compatible with future technical and sociological developments. That flexibility will be critical for the continued applicability of the Act, and it is lacking in many other proposed regulations, which tend to be either too vague or too specific.
Risk and unacceptable uses
The central operating mechanism of the Act is to identify high-risk AI use cases, which include biometric identification, critical infrastructure that can significantly impact human lives, determining access to education and employment, worker management, access to private and public services (e.g., finance), law enforcement, migration and immigration, and the administration of justice and democratic processes. Article 7(2) gives more details on how to make these assessments. Such high-risk systems cannot be released to the public before undergoing a conformity assessment, which determines whether all the requirements of the AIA risk framework have been met.
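To make this gating mechanism concrete, here is a minimal sketch in Python, assuming a simplified set of category labels and a boolean conformity-assessment flag; the `check_release` helper and the category names are illustrative paraphrases, not the Act's formal taxonomy.

```python
# Simplified sketch of the Act's deployment gate. The category names below
# paraphrase the high-risk areas listed above; they are not legal definitions.
HIGH_RISK_CATEGORIES = {
    "biometric_identification",
    "critical_infrastructure",
    "education_access",
    "employment_and_worker_management",
    "essential_services",               # e.g., access to finance
    "law_enforcement",
    "migration_and_immigration",
    "justice_and_democratic_processes",
}

class DeploymentBlocked(Exception):
    """Raised when a high-risk system lacks a passed conformity assessment."""

def check_release(use_case: str, conformity_assessment_passed: bool) -> str:
    """Gate public release: high-risk use cases require a conformity assessment."""
    if use_case in HIGH_RISK_CATEGORIES and not conformity_assessment_passed:
        raise DeploymentBlocked(
            f"'{use_case}' is high-risk under the AIA; "
            "a conformity assessment must pass before release."
        )
    return "release permitted"

# A credit-scoring system (essential services) without an assessment is blocked...
try:
    check_release("essential_services", conformity_assessment_passed=False)
except DeploymentBlocked as err:
    print(err)

# ...and permitted once the assessment has passed.
print(check_release("essential_services", conformity_assessment_passed=True))
```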
Distorting human behavior, exploiting the vulnerabilities of marginalized groups, social scoring, and real-time biometric identification in public spaces (except in certain circumstances, such as those mandated by national law, tracking terrorist activity, or searching for missing persons) are prohibited use cases.
Complying with the AIA
Articles 9 through 15 of the AIA provide guidance on how to comply with the Act, covering practices such as maintaining a risk management system, data governance and management, transparency via constantly updated documentation of the high-risk AI system, logging and traceability throughout the AI system's lifecycle, appropriate human oversight, and balancing the accuracy of the system against other desired properties such as robustness and explainability. Some of these requirements will sound familiar to those who have worked in compliance before and helped their organizations transition into the GDPR era; others emerge from best practices in the MLOps domain. A combined policy and technical approach is the way forward for building AIA-compliant systems, and it will also help in meeting the post-market monitoring requirements proposed in the AIA.
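On the technical side, the logging and traceability requirement lends itself naturally to an MLOps-style audit wrapper. The sketch below is a minimal illustration, assuming a hypothetical model exposing a `predict()` method and an invented record schema; the AIA itself does not prescribe these field names or this design.

```python
import json
import logging
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Emit one structured, append-only audit record per prediction.
logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("aia_audit")

@dataclass
class PredictionRecord:
    """One auditable record: which model, when, on what inputs, with what result."""
    model_id: str
    model_version: str
    timestamp_utc: str
    inputs: dict
    output: float
    human_review_required: bool

class LoggedModel:
    """Wraps any model exposing .predict(features) and logs an audit record per call."""

    def __init__(self, model, model_id, model_version, review_threshold=0.8):
        self.model = model
        self.model_id = model_id
        self.model_version = model_version
        # Low-confidence scores are flagged for human oversight.
        self.review_threshold = review_threshold

    def predict(self, features: dict) -> float:
        score = self.model.predict(features)
        record = PredictionRecord(
            model_id=self.model_id,
            model_version=self.model_version,
            timestamp_utc=datetime.now(timezone.utc).isoformat(),
            inputs=features,
            output=score,
            human_review_required=score < self.review_threshold,
        )
        audit_log.info(json.dumps(asdict(record)))
        return score

# Hypothetical stand-in for a real credit-scoring model.
class DummyCreditModel:
    def predict(self, features: dict) -> float:
        return 0.65

model = LoggedModel(DummyCreditModel(), model_id="credit-scorer", model_version="1.2.0")
model.predict({"income": 42000, "tenure_months": 18})
```

A wrapper like this keeps the audit trail independent of any one model's internals, which is one way the documentation, logging, and human-oversight requirements can be met together rather than piecemeal.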
We can expect intense lobbying from corporations and other organizations seeking to tailor the AIA to better align with their needs. Standard-setting organizations will become more powerful through economic, legal, and political levers, and we must account for the potential power imbalances that arise through this channel. Finally, through the Brussels effect, we may see a positive shift worldwide in attitudes toward building more ethical, safe, and inclusive AI systems.
Between the lines
In line with the work done at the Montreal AI Ethics Institute in creating research summaries, such policy briefs provide a great avenue for catching up on pertinent issues without diving into all the details until needed. They are especially valuable for those who are impacted by policy and technical changes in the field but lack the time and resources to parse a fast-moving landscape. The next step in making such pieces more actionable is to analyze case studies. In the case of the AI Act, it would be instructive to see how it affects currently deployed high-risk AI systems and what process and technical changes are required to make those systems conform with the requirements for deployment in the field. Companies that act quickly on these compliance requirements will surely gain a competitive market edge, mirroring what happened during the transition to the GDPR era.