🔬 Research summary by Benjamin Cedric Larsen, a PhD Fellow at Copenhagen Business School researching questions related to AI ethics and compliance.
[Original paper by Mark McFadden, Kate Jones, Emily Taylor, Georgia Osborn]
Overview: The EU’s AI Act has envisioned a strong role for technical standards in the governance of AI. Little research has been conducted on what this role looks like, however. This paper by Oxford Information Labs provides an in-depth analysis of the world of technical standards and gives a high-level overview of the EU AI Act’s expected reliance on standards, as well as their perceived strengths and weaknesses in the governance of AI.
Introduction
The EU’s AI Act is a far-reaching attempt to provide a regulatory foundation for the safe, fair, and innovative development of Artificial Intelligence in the European Union. An important feature of the AI Act is standardization as a tool of AI governance. Since many AI standards remain nascent, however, few details have been clarified on how the EU intends to govern through them.
To date, the EU AI Act has categorized AI systems into three levels of risk: (i) unacceptable risk, (ii) high risk, and (iii) low risk. These categories determine the regulatory consequences for individual AI systems.
Before the AI Act takes effect, however (intended for 2023), harmonized standards as well as supporting guidance and compliance tools need to be developed in order to assist providers and users in complying with the requirements laid out in the Act. Conformance with technical standards and common specifications is intended to ensure that providers of high-risk AI remain compliant with the Act’s mandatory requirements.
The EU’s AI Act and the case for AI Standards
Technical standards were originally devised to assure safety, quality, and interoperability. For industry players that seek to operate globally, international standards are preferred over national or regional standards because they create a level playing field in markets throughout the world. Historically, standards focused on interoperability as the key benefit to society. Today, many contemporary standards also have social, economic, and political intentions or effects, which in turn shape contemporary norms.
The EU’s New Legislative Framework (NLF) entails a partnership between legislation and standards: Article 40 of the AI Act stipulates that the requirements laid down by the Act can be covered by complying with officially adopted “harmonized standards”. The European Commission has, in the meantime, mandated that European Standardization Organizations (ESOs) prepare standards that meet the requirements of forthcoming European legislation, as set out in the AI Act. Once developed, harmonized standards are approved by the Commission and published in the Official Journal of the European Union, after which industry participants can choose to adopt them.
Harmonized standards are viewed as an important building block of the European single market since they complement the requirements of EU legislation with technical requirements for manufacturers that place products on the European common market. The resulting standards are voluntary but tend to be widely adopted, because compliance allows suppliers to self-declare that they meet the legal requirements and therefore have the right to supply goods to the EU market.
Obstacles to Standardization
While industrial participants, through public consultation, have provided positive feedback on the European approach of the New Legislative Framework (i.e., regulation supported by voluntary harmonized standards), the paper highlights several concerns about the standards-driven approach to AI governance. These refer to:
Speed and timeliness: the speed at which AI standards can be developed and implemented is viewed as an obstacle to their effectiveness in governing the rapid diffusion of AI technologies. Concerns include significant delays in the system, especially in the interval between the conclusion of work on a standard by an ESO and its publication in the Official Journal. In some cases, this means that European standards could lag behind international standards.
Over-reliance on international standards: this is a concern because there is no guarantee that international standards comply with EU rights and values. The AI Act, for example, recognizes that the establishment of common normative standards for all high-risk AI systems should be consistent with the Charter of Fundamental Rights, which may not be acknowledged in other regions.
Wide field of risks: standardization surrounding AI technologies and systems has to address a much wider field of risks than that of other, more generic products and systems. AI applications are typically embedded within complex (social) systems, which makes it difficult for the creator of an AI application to predict all of its use cases, as well as how the system could potentially affect Fundamental Rights.

Compliance: the current work being done on standards does not place enough focus on adjacent compliance tools for assessing AI products and services against the approved European standards. Greater focus therefore needs to be placed on the compliance side, as well as on what tools, other than standards, AI users and producers can utilize in order to stay compliant.
Recommendations on the way forward
The paper ends by proposing seven recommendations on the way forward. These include:
(1) Developing a mechanism that addresses the gap between European Standardization Organizations’ available resources and their ongoing ability to develop AI standards.
(2) A mechanism that ensures broad participation and a focus on human rights in the standards-setting process.
(3) Devising a fast-track process that improves the timeliness of AI standards adoption.
(4) Improved education and training for non-expert AI stakeholders.
(5) The development of additional compliance tools.
(6) Better mechanisms to balance European standards that embed European rights and values with continued cooperation with international Standards Developing Organizations, ensuring global, open, and interoperable AI standards.
(7) Inclusion of SMEs in the standard-setting process, as well as a minimization of the general costs of engagement.
Between the lines
As noted above, technical standards were originally devised to assure safety, quality, and interoperability, while many contemporary standards also carry social, economic, and political intentions or effects. This induces friction between disparate approaches to standard-setting, as well as between the fundamental values that are baked into new technologies via their standards.
The current politicization of technological standards could therefore have implications for interoperability going forward. China’s Standards 2035 strategy, for example, articulates the interplay between domestic standards and the benefits to international trade arising from the legitimacy that international adoption of standards can bring. This means that engineers and scientists who have historically engaged in standardization as a technical process may now find themselves engaged not only in the commercial process of standard-setting, but also in the underlying strategic, geopolitical, and societal impacts of new technical standards. How AI standards continue to be developed internationally, and the degree to which partners can agree on the underlying values baked into a technological standard, is therefore likely to have far-reaching consequences for AI development and interoperability in the years to come.