Harmonizing Artificial Intelligence: The role of standards in the EU AI Regulation

January 18, 2022

🔬 Research summary by Benjamin Cedric Larsen, a PhD Fellow at Copenhagen Business School researching questions related to AI ethics and compliance.

[Original paper by Mark McFadden, Kate Jones, Emily Taylor, Georgia Osborn]


Overview: The EU’s AI Act envisions a strong role for technical standards in the governance of AI, yet little research has examined what this role will look like in practice. This paper by Oxford Information Labs provides an in-depth analysis of the world of technical standards and gives a high-level overview of the EU AI Act’s expected reliance on standards, as well as their perceived strengths and weaknesses in the governance of AI.


Introduction

The EU’s AI Act is a far-reaching attempt to provide a regulatory foundation for the safe, fair, and innovative development of Artificial Intelligence in the European Union. An important feature of the AI Act is standardization as a tool of AI governance. Since many AI standards remain nascent, however, few details have been clarified on how the EU intends to govern through them.

So far, the EU AI Act has categorized the risk associated with AI systems into three levels: (i) unacceptable risk, (ii) high risk, and (iii) low risk. These categories determine the regulatory consequences for individual AI systems.
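
To make the tiered logic concrete, the minimal sketch below models the categories as a lookup from risk tier to regulatory consequence. It is purely illustrative: the names and one-line consequences are simplified assumptions for this summary, not language from the Act itself.

```python
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LOW = "low"

# Simplified, assumed consequences for illustration only; the Act's
# actual obligations are far more detailed and nuanced.
CONSEQUENCES = {
    RiskLevel.UNACCEPTABLE: "prohibited from the EU market",
    RiskLevel.HIGH: "subject to mandatory requirements and conformity assessment",
    RiskLevel.LOW: "minimal obligations (e.g., transparency)",
}

def regulatory_consequence(level: RiskLevel) -> str:
    """Look up the (simplified) regulatory consequence for a risk tier."""
    return CONSEQUENCES[level]

print(regulatory_consequence(RiskLevel.HIGH))
# -> subject to mandatory requirements and conformity assessment
```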

Before the AI Act goes into effect, however (intended for 2023), harmonized standards, as well as supporting guidance and compliance tools, need to be developed in order to assist providers and users in complying with the requirements laid out in the Act. Conformance with technical standards and common specifications is intended to ensure that providers of high-risk AI systems remain compliant with the mandatory requirements of the AI Act.

The EU’s AI Act and the case for AI Standards

Technical standards were originally devised to assure safety, quality, and interoperability. For industry players that seek to operate globally, international standards are preferred over national or regional standards because they create a level playing field in markets throughout the world. Historically, standards focused on interoperability as the key benefit to society; today, many standards also have social, economic, and political intentions or effects, which in turn influence contemporary norms.

The EU’s New Legislative Framework (NLF) entails a partnership between legislation and standards: Article 40 of the AI Act stipulates that the requirements laid down by the Act can be covered by complying with officially adopted “harmonized standards”. The European Commission has, in the meantime, mandated that European Standardization Organizations (ESOs) prepare standards that meet the requirements of forthcoming European legislation, as set out in the AI Act. Once drafted, harmonized standards are approved by the Commission and published in the Official Journal of the European Union, after which industry participants can choose to adopt them.

Harmonized standards are viewed as an important building block of the European single market since they complement the requirements of EU legislation with technical requirements for manufacturers that provide products on the European common market. The resulting standards are voluntary but tend to be widely adopted, because compliance allows suppliers to self-declare that they meet the legal requirements of the legislation and therefore have the right to supply goods to the EU market.
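
This presumption-of-conformity mechanism can be sketched as a simple decision flow. The following is an assumed, highly simplified illustration; the types, field names, and the binary high-risk/low-risk split are hypothetical stand-ins for the Act’s much more detailed conformity-assessment procedures.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    high_risk: bool
    meets_harmonized_standard: bool  # i.e., a standard published in the Official Journal

def may_self_declare_conformity(system: AISystem) -> bool:
    """Hypothetical check: complying with a published harmonized standard
    is presumed to satisfy the Act's mandatory requirements, allowing the
    provider to self-declare conformity (simplified)."""
    if not system.high_risk:
        return True  # simplification: only minimal obligations apply
    return system.meets_harmonized_standard

scoring_model = AISystem("credit-scoring model", high_risk=True,
                         meets_harmonized_standard=True)
print(may_self_declare_conformity(scoring_model))  # -> True
```

In practice, a high-risk provider that does not follow a harmonized standard may still demonstrate conformity through other routes; the sketch only captures the voluntary, self-declaration path described above.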

Obstacles to Standardization

While industrial participants have, through public consultation, provided positive feedback on the European approach of the New Legislative Framework (i.e., regulation supported by voluntary harmonized standards), the paper highlights several concerns about the standards-driven approach to AI governance. These refer to:

Speed and timeliness: the speed at which AI standards can be developed and implemented is viewed as an obstacle to their effectiveness in governing the far more rapid diffusion of AI technologies. Concerns include significant delays in the system, especially in the interval between the conclusion of work on a standard by an ESO and its publication in the Official Journal. In some cases, this means that European standards could lag behind international standards.

Over-reliance on international standards: this is a concern because there is no guarantee that international standards comply with EU rights and values. The AI Act, for example, recognizes that the establishment of common normative standards for all high-risk AI systems should be consistent with the Charter of Fundamental Rights, which may not be acknowledged in other regions.

Wide field of risks: standardization surrounding AI technologies and systems has to address a much wider field of risks than for other, more generic products and systems. AI applications are typically embedded within complex (social) systems, which makes it difficult for the creator of an AI application to predict all of its use cases, as well as how the system could potentially affect fundamental rights.

Compliance: the current work being done on standards does not place enough focus on adjacent compliance tools for assessing AI products and services against the approved European standards. Greater focus therefore needs to be placed on the compliance side, as well as on what tools, other than standards, AI users and producers can utilize in order to stay compliant.

Recommendations on the way forward

The paper ends by proposing seven recommendations on the way forward. These include:

(1) Developing a mechanism that addresses the gap between European Standardization Organizations’ available resources and their ongoing ability to develop AI standards.

(2) A mechanism that ensures broad participation and a focus on human rights in the standards-setting process.

(3) Devising a fast-track process that aims to improve the timeliness of adopting AI standards.

(4) Improved education and training for non-expert AI stakeholders.

(5) The development of additional compliance tools.

(6) Better mechanisms to balance European standards that embed European rights and values with continued cooperation with international standards-developing organizations, so as to ensure global, open, and interoperable AI standards.

(7) Inclusion of SMEs in the standards-setting process, as well as a minimization of the general costs of engagement.

Between the lines

As noted above, technical standards were originally devised to assure safety, quality, and interoperability, whereas many contemporary standards also carry social, economic, and political intentions or effects that shape contemporary norms. This induces friction between disparate approaches to standard-setting, as well as between the fundamental values that are baked into new technologies via their standards.

The current politicization of technological standards could therefore have implications for interoperability going forward. China’s Standards 2035 strategy, for example, articulates the interplay between domestic standards and the benefits to international trade arising from the legitimacy that international adoption of standards can bring. This means that engineers and scientists who have historically engaged in standardization as a technical process may now find themselves engaged not only in the commercial process of standard-setting, but also in the underlying strategic, geopolitical, and societal impacts of new technical standards. How AI standards continue to be developed internationally, and the degree to which partners can agree on the underlying values baked into a technological standard, is therefore likely to have far-reaching consequences for AI development and interoperability in the years to come.

