🔬 Research Summary by Manuel Wörsdörfer, an Assistant Professor of Management and Computing Ethics at the Maine Business School and School of Computing and Information Science at the University of Maine.
[Original paper by Manuel Wörsdörfer]
Overview: In light of the rise of generative AI and recent debates about the socio-political implications of large language models and chatbots, this article investigates the E.U.’s Artificial Intelligence Act (AIA), the world’s first comprehensive attempt by a government body to address and mitigate the potential negative impacts of AI technologies. In particular, the article analyzes the AIA’s strengths and weaknesses and proposes reform measures that could strengthen it.
Introduction
The rise of generative AI – including chatbots and the underlying large language models – has sparked debates about AI’s socio-economic and political implications. Critics point out that these technologies could exacerbate the spread of disinformation, deception, fraud, and manipulation; amplify discrimination risks and biases; trigger hate crimes; lead to more surveillance; and further undermine trust in democracy and the rule of law.
As these discussions show, there is a growing need for some form of AI regulation, and several governments have already taken initial measures. The Competition and Markets Authority (CMA) in the U.K., for instance, has launched an investigation into AI foundation models, and the White House has released a ‘Blueprint for an AI Bill of Rights’ and an executive order on safe, secure, and trustworthy AI. The boldest move so far has come from the European Commission, which released its Artificial Intelligence Act (AIA) proposal in 2021; extensive deliberations in the Council of the European Union and the European Parliament followed in 2022, and the Parliament approved the revised draft in early summer 2023.
Key Insights
The Artificial Intelligence Act
The AIA’s primary goal is to create a legal framework for secure, trustworthy, and ethical AI. It aims to ensure that AI technologies are human-centric, safe to use, compliant with the law, and respectful of fundamental rights, especially those enshrined in the Charter of Fundamental Rights. The AIA results from extensive stakeholder consultation, incorporates input from the High-Level Expert Group on AI (HLEG), and builds on the E.U. White Paper and other AI ethics initiatives. It is part of the E.U.’s digital single market strategy, complements the General Data Protection Regulation (GDPR), and is consistent with the Digital Services Act (DSA), the Digital Markets Act (DMA), and other regulatory initiatives such as the Data Governance Act, the revised AI Liability Directive, and sectoral product safety frameworks.
The AIA sorts AI systems into risk categories – unacceptable, high, and limited or minimal risk. The higher the risk, the stricter the requirements that apply to those technologies. The AIA denies market access whenever the risks are deemed too high for risk-mitigating interventions. High-risk AI systems are granted market access only if they comply with the AIA, which entails ex-ante technical requirements, such as risk management systems and certification, and an ex-post market monitoring procedure. Minimal-risk systems must fulfill general safety requirements, such as those in the General Product Safety Directive.
In short, the AIA defines prohibited AI practices – including social scoring – and high-risk AI systems, that is, systems that pose significant risks to the health and safety of individuals or negatively affect their fundamental rights. Under the Parliament’s recommendations, chatbots and deepfakes would be considered high-risk. The AIA also defines where real-time facial recognition is allowed, restricted, and prohibited, and it imposes transparency and other obligations on high-risk AI technologies.
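To make the tiered logic concrete, the following minimal Python sketch encodes the risk tiers and the obligations each triggers, as summarized above. The tier names, obligation lists, and function are hypothetical illustrations of the scheme, not part of the AIA’s text or any official tooling.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"        # prohibited practices, e.g., social scoring
    HIGH = "high"                        # significant risks to health, safety, or fundamental rights
    LIMITED_OR_MINIMAL = "limited_or_minimal"

# Hypothetical mapping of each tier to the obligations summarized above.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["market access denied"],
    RiskTier.HIGH: [
        "ex-ante risk management system",
        "conformity assessment / certification",
        "transparency obligations",
        "ex-post market monitoring",
    ],
    RiskTier.LIMITED_OR_MINIMAL: [
        "general product safety requirements",
    ],
}

def screening_outcome(tier: RiskTier) -> str:
    """Return a one-line summary of market access under the tiered scheme."""
    if tier is RiskTier.UNACCEPTABLE:
        return "Prohibited: no market access."
    if tier is RiskTier.HIGH:
        return "Market access only upon AIA compliance: " + "; ".join(OBLIGATIONS[tier])
    return "Market access subject to: " + "; ".join(OBLIGATIONS[tier])

if __name__ == "__main__":
    for tier in RiskTier:
        print(f"{tier.value}: {screening_outcome(tier)}")
```

The key design point the sketch illustrates is that obligations scale with the tier: the unacceptable tier short-circuits to a market ban, while the high-risk tier layers ex-ante requirements on top of ex-post monitoring.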
Strengths and Weaknesses
With the AIA, the E.U. recognizes that AI technologies can adversely affect fundamental rights and that voluntary self-regulation offers inadequate protection. The Commission thus changed course – from soft to hard law – and reversed its previous strategy, which was based on the recommendations of the HLEG and the White Paper. The AIA also attempts to foster innovation – e.g., through so-called regulatory sandboxes for small and start-up companies – while protecting humans, the rule of law, and democracy. Lastly, it seeks to create a level playing field of protection across E.U. member states and prioritizes respect for human rights, including health and safety.
Among the AIA’s strengths is its legally binding, i.e., hard-law character, which marks a welcome departure from existing voluntary or self-regulatory AI ethics initiatives. Other positive aspects include the AIA’s extra-territoriality and the possible extension of the ‘Brussels Effect,’ its ability to address data quality and discrimination risks, and institutional innovations such as the European Artificial Intelligence Board (EAIB), publicly accessible logs, and a European database for AI systems – an essential step toward opening black-box algorithms. Yet from an AI ethics perspective, the AIA falls short of realizing its full potential. Experts are primarily concerned with its proposed governance structure; they specifically criticize its:
- Lack of effective enforcement (i.e., over-reliance on provider self-assessment and monitoring, and the discretionary leeway granted to standardization bodies).
- Lack of adequate oversight and control mechanisms (i.e., inadequate stakeholder consultation and participation, existing power asymmetries, lack of transparency, and consensus-finding problems during the standardization procedure).
- Lack of procedural rights (i.e., complaint and remedy mechanisms).
- Lack of worker protection (i.e., possible undermining of employee rights, especially for workers exposed to AI-powered workplace monitoring).
- Lack of institutional clarity (i.e., insufficient coordination among oversight institutions and unclear delineation of their competencies).
- Lack of sufficient funding and staffing (i.e., underfunding and understaffing of market surveillance authorities).
- Lack of consideration of environmental sustainability, given AI’s significant energy requirements (i.e., no mandate for ‘green AI’ or ‘sustainable AI’).
Reform Measures
Several reform measures must be taken to address these issues, such as introducing or strengthening:
- Conformity assessment procedures: The AIA needs to move beyond the currently flawed system of provider self-assessment and certification towards mandatory third-party audits for all high-risk AI systems. The existing governance regime, which grants AI providers and technical standardization bodies significant discretion in self-assessment and certification, should be replaced with legally mandated external oversight by an independent regulatory agency with appropriate investigatory and enforcement powers.
- Democratic accountability and judicial oversight: Also needed is meaningful engagement of all affected groups, including consumers and social partners (e.g., unions and workers exposed to AI systems), as well as public representation in the standardization and certification of AI technologies. The overall goal is to ensure that those with less bargaining power are included and their voices are heard.
- Redress and complaint mechanisms: Besides consultation and participation rights, experts also call for explicit information rights; easily accessible, affordable, and effective legal remedies; and individual and collective complaint and redress mechanisms. That is, bearers of fundamental rights must have the means to defend themselves if they feel they have been adversely impacted by AI systems or treated unlawfully, and AI subjects must be able to legally challenge the outcomes of such systems.
- Worker protection: Experts demand better involvement and protection of workers and their representatives wherever AI technologies are deployed. This could be achieved by classifying more AI at-work systems as high-risk. Workers should also be able to participate in management decisions regarding the use of AI tools in the workplace, and their voices and concerns should be heard, especially when technologies that might negatively impact their work experience are introduced. Moreover, workers should have the right to object to the use of specific AI tools in the workplace and be able to file complaints.
- Governance structure: Effective enforcement of the AIA also hinges on strong institutions and ‘ordering powers.’ The EAIB has the potential to be such a power and to strengthen AIA oversight and supervision; this, however, requires that it have the corresponding capacity, technological and human rights expertise, resources, and political independence. To ensure adequate transparency, the E.U.’s AI database should cover not only high-risk systems but all forms of AI technologies, and it should list all systems used by private and public entities. The material provided to the public should include information on algorithmic risk and human rights impact assessments, and this data should be available to those affected by AI systems in an easily understandable and accessible format (a hypothetical record layout is sketched after this list).
- Funding and staffing of market surveillance authorities: Besides the EAIB and the AI database, national authorities must be strengthened – both financially and in terms of expertise. The 25 full-time-equivalent positions the AIA foresees for national supervisory authorities are insufficient; additional financial and human resources must be invested in regulatory agencies to implement the proposed AI regulation effectively.
- Sustainability considerations: To better address the negative externalities and environmental concerns of AI systems, experts also demand sustainability requirements for AI providers, e.g., obliging them to reduce the energy consumption and e-waste of AI technologies and thereby move towards ‘green AI.’ Ideally, these requirements would be mandatory and go beyond the existing voluntary codes of conduct.
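As referenced in the governance-structure item above, here is a minimal Python sketch of what a public AI-database record might contain, reflecting the transparency demands listed there (system, deploying entity, risk tier, and risk and human rights impact assessments in plain language). The class name, fields, and rendering function are assumptions for illustration, not the E.U. database’s actual schema.

```python
from dataclasses import dataclass

@dataclass
class AIDatabaseEntry:
    """Hypothetical record layout for a public AI-system database entry
    (illustrative only; not an official schema)."""
    system_name: str
    provider: str
    deploying_entity: str                 # private or public entity using the system
    risk_tier: str                        # e.g., "high", "limited_or_minimal"
    algorithmic_risk_assessment: str      # plain-language summary for affected persons
    human_rights_impact_assessment: str   # plain-language summary for affected persons
    public_logs_url: str = ""             # link to publicly accessible logs, if any

def plain_language_summary(entry: AIDatabaseEntry) -> str:
    """Render an entry in an easily understandable format, per the reform measure."""
    return (
        f"{entry.system_name} (provider: {entry.provider}; used by: {entry.deploying_entity})\n"
        f"Risk tier: {entry.risk_tier}\n"
        f"Algorithmic risks: {entry.algorithmic_risk_assessment}\n"
        f"Human rights impact: {entry.human_rights_impact_assessment}"
    )
```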
Between the lines
The challenges of generative AI and its underlying large language models (LLMs) will likely necessitate additional AI regulation, and the AIA will need to be revised and updated regularly. Future work in AI ethics and regulation must stay vigilant about these developments, and the Commission – and other governing bodies – must put in place a mechanism for amending the AIA as we adapt to new AI advancements and learn from the successes and mistakes of our regulatory interventions.