🔬 Research summary by Benjamin Cedric Larsen, a PhD Fellow at Copenhagen Business School researching questions related to AI ethics and compliance.
[Original paper by Jakob Mökander, Maria Axente, Federico Casolari, and Luciano Floridi.]
Overview: The proposed European Artificial Intelligence Act (AIA) is likely to become an important reference point that sets a precedent for how AI systems can be regulated. However, the two primary enforcement mechanisms proposed in the AIA have received little study. These are the conformity assessments that providers of high-risk AI systems are expected to conduct, and the post-market monitoring plans that providers must establish to document the performance of high-risk AI systems throughout their lifetimes. This summary provides a brief overview of both mechanisms.
The proposed European Artificial Intelligence Act (AIA) is expected to go into effect starting from 2023. However, the intended enforcement mechanisms of the AIA, as well as its proposed institutional structure, remain little understood. Mökander et al. take a deep dive into the intended workings of the AIA and argue that the regulation can be interpreted as a proposal to establish a Europe-wide ecosystem for AI auditing.
The paper offers two main contributions. First, it translates the enforcement mechanisms laid out in the AIA into two distinct forms of AI auditing, adding clarity to the requirements of the AIA as well as to the intended institutional structure of the initiative. Second, it highlights seven aspects of the AIA where further clarification would be helpful.
A new process for AI auditing
Pre-market conformity assessment
The AIA clusters AI systems into three levels of risk: unacceptable risk, high risk, and little or no risk. The governance requirements differ between the three risk levels, and AI systems considered to pose an unacceptable risk are outright banned. This includes, for example, AI systems that can be used for general-purpose social scoring or for real-time remote biometric identification in law enforcement.
AI systems that pose little or no risk are not subject to any interventions stipulated in the AIA, whereas high-risk AI systems will be subject to strict obligations before they may enter the European market. While the majority of AI systems are expected to fall into the low-risk category, the requirements for high-risk AI systems are more elaborate. These include, for example, the establishment of a risk management system, the identification and mitigation of known and foreseeable risks, and adequate testing and validation.
High-risk AI systems are only permitted to operate on the European market if they have been subjected to a pre-market conformity assessment. Once a high-risk AI system has demonstrated conformity with the AIA, it receives a standardized CE marking, after which it can be freely deployed on the EU market.
Today, many high-risk AI systems are already subject to third-party conformity assessments, for example under current product safety law. These include AI systems incorporated into medical devices or toys. In these cases, the requirements set out in the AIA will be integrated into existing sectoral safety legislation to avoid duplicating administrative burdens.
High-risk AI systems that do not fall into this category are referred to as stand-alone systems and are subject to a different set of requirements. Providers of stand-alone systems have two options for conducting pre-market conformity assessments: they can either conduct an internal conformity assessment, which is equivalent to performing an internal audit, or involve a third-party auditor that assesses the AI system or product before it enters the European market.
Post-market monitoring
In addition to the pre-market conformity assessments, providers of high-risk AI systems are also expected to establish and document post-market monitoring systems. The purpose of post-market monitoring is to document and analyze the behavior and performance of a high-risk AI system after its deployment and throughout the entire span of its operation.
Post-market assessments complement the pre-market CE certifications, since providers of high-risk AI systems are expected to report any serious incident or malfunctioning that constitutes a breach of EU law. Post-market monitoring seeks to ensure that providers take immediate corrective action to bring an AI system back into conformity or withdraw it from the market entirely.
To detect, report on, and address system failures in effective and systematic ways, providers must draft post-market monitoring plans that account for the intended nature and functioning of their AI systems. The post-market monitoring plan is therefore complementary to the conformity assessment, because it is partly based on an evaluation of the AI system carried out before deployment.
The emergence of a new EU auditing ecosystem
According to the AIA, the ultimate responsibility for ensuring compliance rests with the providers and users of high-risk AI systems. To ensure regulatory oversight, however, the Commission proposes a governance structure that spans both the Union level and the national level of its member states. At the Union level, a European Artificial Intelligence Board will be established to collect and share best practices among member states and to issue recommendations on uniform administrative practices. The Board is conceived as a coordinating structure in which Member State and Commission representatives gather to discuss best practices and facilitate the actual implementation of the AIA.
At the national level, member states are expected to designate a national authority to supervise the application and implementation of the AIA. The national supervisory authority is not expected to conduct any conformity assessments itself; instead, it will designate third-party organizations that have developed the capacity to conduct pre-market conformity assessments of high-risk AI systems on behalf of providers. To become an assessment body, an organization must apply for notification with the national supervisory authority of the member state in which it is established.
Seven recommendations for improving the AI Act
The paper concludes by highlighting seven areas where further guidance on the AIA is needed. These are:
1. Level of abstraction. The AIA should provide further guidance and more detail on applicable industry standards and evaluation metrics for AI auditing.
2. Material scope. A more concise scope would help providers of AI systems, third-party auditors, as well as national authorities to direct their resources more effectively.
3. Conceptual precision. Further guidance is needed on which kinds of distortions the AIA intends to prohibit.
4. Procedural guidance. Many details concerning how pre-market conformity assessments and post-market monitoring should be conducted in practice have not yet been clarified. This makes it hard for companies to prepare by developing new audit-related practices.
5. Institutional mandate. The role and mandate of the European Artificial Intelligence Board remains unclear.
6. Resolving tensions. Further guidance could be provided on how to resolve tensions between conflicting values, such as accuracy and privacy, as well as on how to prioritize between conflicting definitions of normative concepts, like fairness, in different situations.
7. Checks and balances. How providers ensure compliance with the AIA is not disclosed to the public, which could result in a lack of checks and balances ensuring that AI systems are robust and ethical.
Between the lines
The risk-based approach outlined in the AIA is promising, as it begins to shift the regulatory focus from the technology itself to the ways in which it is applied. Going forward, this means it will be less important to label a specific technical system 'AI' and more important to scrutinize the normative ends for which the system is employed.
As normative interpretations tend to differ at the international level, however, this opens up new discussions about how regional forms of horizontal regulation are likely to extend into the international sphere. The AI Act, for example, explicitly bans general-purpose social scoring and real-time remote biometric identification for law enforcement, yet these are AI technologies that are already being widely implemented in China.
As new and differing horizontal regulations begin to emerge, it is important to think about international alignment on AI regulation. This includes evaluating how normative and socio-technological differences in terms of AI implementation could be mitigated at the international level.