Montreal AI Ethics Institute


Conformity Assessments and Post-market Monitoring: A Guide to the Role of Auditing in the Proposed European AI Regulation

March 2, 2022

🔬 Research summary by Benjamin Cedric Larsen, a PhD Fellow at Copenhagen Business School researching questions related to AI ethics and compliance.

[Original paper by Jakob Mökander, Maria Axente, Federico Casolari, and Luciano Floridi.]


Overview: The proposed European Artificial Intelligence Act (AIA) is likely to become an important reference point that sets a precedent for how AI systems can be regulated. However, the two primary enforcement mechanisms proposed in the AIA have received little study. These are the conformity assessments that providers of high-risk AI systems are expected to conduct, and the post-market monitoring plans that providers must establish to document the performance of high-risk AI systems throughout their lifetimes. This summary provides a brief overview of both mechanisms.


Introduction

The proposed European Artificial Intelligence Act (AIA) is expected to go into effect starting in 2023. However, the intended enforcement mechanisms of the AIA, as well as its proposed institutional structure, remain little understood. Mökander et al. take a deep dive into the intended workings of the AIA and argue that the regulation can be interpreted as a proposal to establish a Europe-wide ecosystem for AI auditing.

The paper offers two main contributions. First, it translates the enforcement mechanisms laid out in the AIA into two distinct forms of AI auditing, adding clarity to the requirements of the AIA as well as to the intended institutional structure of the initiative. Second, it highlights seven aspects of the AIA where further clarification would be helpful.

A new process for AI auditing

Pre-market conformity assessment

The AIA clusters AI systems into three risk levels: unacceptable risk, high risk, and little or no risk. The governance requirements differ between the three levels, and AI systems considered to pose an unacceptable risk are banned outright. This includes, for example, AI systems that can be used for general-purpose social scoring or for real-time remote biometric identification for law enforcement.

AI systems that pose little or no risk are not subject to any interventions stipulated in the AIA, whereas high-risk AI systems will be subject to strict obligations before they may enter the European market. While the majority of AI systems are expected to fall into the low-risk category, the requirements for high-risk AI systems are more elaborate. These include, for example, the establishment of a risk management system, the identification and mitigation of known and foreseeable risks, and adequate testing and validation.
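The tiered structure described above can be sketched as a simple lookup. This is a paraphrase of the summary for illustration only, not a legal checklist; the tier names, function name, and obligation strings are all illustrative choices, not terms defined by the AIA:

```python
# Illustrative sketch: the AIA's three risk tiers and the obligations the
# summary attributes to each, expressed as a lookup table.

UNACCEPTABLE = "unacceptable"  # e.g. general-purpose social scoring: banned
HIGH = "high"                  # strict obligations before market entry
LOW = "low"                    # no interventions stipulated in the AIA

OBLIGATIONS = {
    UNACCEPTABLE: ["prohibited from the EU market"],
    HIGH: [
        "establish a risk management system",
        "identify and mitigate known and foreseeable risks",
        "conduct adequate testing and validation",
        "pass a pre-market conformity assessment (CE marking)",
        "maintain a post-market monitoring plan",
    ],
    LOW: [],
}

def obligations_for(risk_tier: str) -> list[str]:
    """Return the obligations attached to a given risk tier."""
    return OBLIGATIONS[risk_tier]
```

The point of the structure is that the tier, not the technology, drives the obligations: two very different systems in the same tier face the same requirements.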

High-risk AI systems are only permitted to operate on the European market if they have been subjected to a pre-market conformity assessment. Once a high-risk AI system has demonstrated conformity with the AIA, it receives a standardized CE marking, after which it can be freely deployed on the EU market.

Today, many high-risk AI systems are already subject to third-party conformity assessments, e.g., under current product safety law. These include AI systems incorporated into medical devices or toys, for instance. In these cases, the requirements set out in the AIA will be integrated into existing sectoral safety legislation to avoid duplicating administrative burdens.

High-risk AI systems that do not fall into this category are referred to as stand-alone systems and are subject to a different set of requirements. Providers of stand-alone systems have two options for conducting pre-market conformity assessments: they can either conduct an internal conformity assessment, which is equivalent to performing an internal audit, or involve a third-party auditor that assesses the AI system or product before it enters the European market.

Post-market monitoring

In addition to the pre-market conformity assessments, providers of high-risk AI systems are also expected to establish and document post-market monitoring systems. The task of post-market monitoring is to document and analyze the behavior and performance of a high-risk AI system after its implementation and during the entire span of its operation.

Post-market assessments complement the pre-market CE certification, since providers of high-risk AI systems are expected to report any serious incident or malfunctioning that constitutes a breach of EU law. Post-market monitoring seeks to ensure that providers take immediate corrective actions to bring an AI system into conformity or withdraw it completely from the market.

To detect, report on, and address system failures in effective and systematic ways, providers must draft post-market monitoring plans that account for the intended nature and functioning of their AI systems. The post-market monitoring plan is therefore complementary to the conformity assessment because it is partially based on an evaluation of the AI system before it is implemented.

The emergence of a new EU auditing ecosystem

According to the AIA, the ultimate responsibility to ensure compliance rests with the providers and users of high-risk AI systems. However, to ensure regulatory oversight, the Commission proposes to set up a governance structure that spans both the Union level and the national level of its member states. At the Union level, a European Artificial Intelligence Board will be established to collect and share best practices among member states and to issue recommendations on uniform administrative practices. The Board is conceived as a coordinating structure in which member state and Commission representatives discuss best practices while facilitating the actual implementation of the AIA.

At the national level, member states are expected to designate a national authority to supervise the application and implementation of the AIA. The national supervisory authority is not expected to conduct any conformity assessments itself; instead, it will designate third-party organizations that have developed the capacity to conduct pre-market conformity assessments of providers of high-risk AI systems. To become an assessment body, an organization must apply for notification with the national supervisory authority of the member state in which it is established.

Seven recommendations for improving the AI Act

The paper concludes by highlighting seven areas where further guidance on the AIA is needed. These are:

1. Level of abstraction. The AIA should provide further guidance and more detail on applicable industry standards and evaluation metrics for AI auditing.

2. Material scope. A more concise scope would help providers of AI systems, third-party auditors, and national authorities direct their resources more effectively.

3. Conceptual precision. Further guidance is needed on the kinds of distortions the AIA refers to as prohibited.

4. Procedural guidance. Many details concerning how pre-market conformity assessments and post-market monitoring should be conducted in practice have not yet been clarified, which makes it hard for companies to prepare by developing new audit-related practices.

5. Institutional mandate. The role and mandate of the European Artificial Intelligence Board remain unclear.

6. Resolving tensions. Further guidance could be provided on how to resolve tensions between conflicting values, such as accuracy and privacy, and on how to prioritize between conflicting definitions of normative concepts, like fairness, in different situations.

7. Checks and balances. How providers ensure compliance with the AIA is not disclosed to the public, which could result in a lack of checks and balances ensuring that AI systems are robust and ethical.

Between the lines

The risk-based approach outlined in the AIA is promising, as it begins to shift the regulatory focus from the technology itself to its application. Going forward, this means it will be less important whether a specific technical system is labeled 'AI' and more important to scrutinize the normative ends for which the system is employed.

As normative interpretations tend to differ at the international level, however, this opens up new discussions about how regional forms of horizontal regulation are likely to extend into the international sphere. The AI Act, for example, explicitly bans general-purpose social scoring and real-time remote biometric identification for law enforcement, AI technologies that are already being widely implemented in China.

As new and differing horizontal regulations begin to emerge, it is important to think about international alignment on AI regulation. This includes evaluating how normative and socio-technological differences in terms of AI implementation could be mitigated at the international level. 
