Montreal AI Ethics Institute

Democratizing AI ethics literacy


The importance of audit in AI governance

January 27, 2024

šŸ”¬ Research Summary by Diptish Dey, Ph.D. and Debarati Bhaumik, Ph.D.

Diptish Dey teaches and conducts research in responsible AI at the Faculty of Business & Economics of the Amsterdam University of Applied Sciences, focusing on the auditability and explainability of AI systems.

Debarati Bhaumik is a lecturer and researcher at the Amsterdam University of Applied Sciences, working on methods for auditing AI systems and on the explainability of recommender systems.

[Original paper by Diptish Dey and Debarati Bhaumik]


Overview: This paper examines how organizations struggle to comply with upcoming AI regulations. Drawing on insights from primary and secondary research, it proposes a governance model that makes complete AI systems auditable, thereby enabling transparency, explainability, and regulatory compliance.


Introduction

AI, through its applications, is steadily enveloping humans and their environment. Organizations envisage these applications to create value in varied ways, among others through innovative products and services. That technology leads to undesirable effects is well researched. Yet this time the stakes are of the greatest moment: the rudimentary classification arrangements in data-driven AI systems infringe on solemn and historically hard-won fundamental rights, most prominently the right to non-discrimination. This prompted a vigilant European Commission to propose the Artificial Intelligence Act (AIA) to safeguard its citizens’ fundamental rights. Legislation without proper enforcement mechanisms is heterogeneous in its impact, as the roll-out of the GDPR made noticeable. To understand how organizations comply, or intend to comply, with the AIA, we initiated primary research in the Netherlands through a survey and validation interviews.

Key Insights

Learnings from GDPR

The GDPR precedes the AIA, and the two share ambitions, among them the protection of fundamental rights. Complying with ISO 27001:2013 assists organizations to a major extent in pursuing GDPR compliance: whereas the former is precise and action-minded, the latter is not and is very open to interpretation. The GDPR’s impact has also been heterogeneous, with smaller providers affected more than larger ones. A major difference is that the AIA regime is considerably more complex than that of the GDPR. The black-box behavior of stacked-up algorithms and systems within a single value proposition creates ample opportunity for providers to comply creatively. Furthermore, the increasingly DIY nature of AI development compounds the lack of compliance.

Size matters yet again

We conducted a survey in the Netherlands to analyze the extent to which organizations currently comply with the upcoming AIA. The survey, conducted in 2023 across more than 30 organizations, provided unique insights into their (lack of) compliance and the priority they assigned to eventually becoming compliant with the AIA. We studied the moderating role that variables such as size and level of outsourcing, among others, play in an organization’s level of AIA compliance. Subsequently, we held validation meetings with several survey participants to identify root causes, gaining valuable insights into the process and resource challenges organizations face. Almost all organizations overstated their actual level of compliance. Among the many hypotheses tested, the relationship between organization size and level of compliance stood out: smaller organizations were less compliant, and they also assigned a lower priority to complying with the AIA. Are we re-experiencing the heterogeneous effects seen with the GDPR? How can we better enforce the AIA?
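The size hypothesis above can be illustrated with a minimal sketch. The survey data, field names, scores, and the headcount split point below are all invented for illustration; the paper’s actual instrument and analysis are not reproduced here.

```python
# Hypothetical illustration: does organization size track AIA compliance?
# All records and scores are synthetic; only the shape of the analysis
# (compare compliance across a size split) mirrors the survey's hypothesis.

from statistics import mean

# Toy survey records: (employee_count, compliance_score on a 0-5 scale)
responses = [
    (12, 1.0), (25, 1.5), (40, 2.0), (60, 1.5),        # smaller organizations
    (300, 3.0), (800, 3.5), (1500, 4.0), (5000, 3.5),  # larger organizations
]

SMALL_THRESHOLD = 250  # EU SME headcount ceiling, used here as a split point

small = [score for size, score in responses if size < SMALL_THRESHOLD]
large = [score for size, score in responses if size >= SMALL_THRESHOLD]

print(f"mean compliance (small orgs): {mean(small):.2f}")
print(f"mean compliance (large orgs): {mean(large):.2f}")
# A gap in this direction is what the size hypothesis predicts:
# smaller providers report lower compliance.
```

A real analysis would of course test moderation effects with a proper model and significance tests rather than a two-group mean comparison, but the comparison captures the direction of the finding.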

Importance of audit in AI governance

Enforcement mechanisms are essential for legislative/regulatory success. From an enforcement perspective, the AIA resembles a ā€˜command and control’ strategy, in which creative compliance is largely prevented through a balance between deterrence and audit. Auditing would greatly improve the chances of discovering non-compliant AI systems. However, auditing AI technologies only, as opposed to complete systems, would be insufficient due to the former’s increasingly complex and non-transparent behavior. More importantly, the question is, how do we govern the development of AI systems in organizations? Which considerations do we need to make in the design of a governance model? Which processes must accompany the implementation of such a model? Do we need to create new functions in charge of these processes? To what extent is such a governance model auditable against pre-defined KPIs? What is the economic cost of implementing such a governance model? These are all open questions today, requiring research and generating stimulating insights.
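One of the open questions above is whether a governance model can be audited against pre-defined KPIs. As a hedged sketch of what that might look like in practice, the snippet below represents KPIs as machine-checkable records so an auditor can score a complete AI system rather than a single model. The KPI names, targets, and measured values are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch: governance KPIs as machine-checkable records.
# KPI names and thresholds are invented for illustration.

from dataclasses import dataclass

@dataclass
class KPI:
    name: str
    target: float      # minimum acceptable value
    measured: float    # value observed during the audit

    def passes(self) -> bool:
        return self.measured >= self.target

def audit(kpis: list[KPI]) -> dict:
    """Score a system against all KPIs; compliant only if none fail."""
    failures = [k.name for k in kpis if not k.passes()]
    return {"compliant": not failures, "failed_kpis": failures}

system_kpis = [
    KPI("documentation_coverage", target=0.9, measured=0.95),
    KPI("logged_decisions_fraction", target=1.0, measured=0.8),
    KPI("bias_tests_passed_fraction", target=1.0, measured=1.0),
]

print(audit(system_kpis))
# → {'compliant': False, 'failed_kpis': ['logged_decisions_fraction']}
```

The design choice worth noting is that the audit verdict is derived from explicit per-KPI targets rather than a holistic judgment, which is what makes creative compliance harder to sustain under a ā€˜command and control’ enforcement strategy.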

The APPRAISE framework

We propose an AI governance framework, APPRAISE, which results from primary research insights and argumentation from secondary sources. The latter contributed towards recognizing and analyzing four pressures that organizations embracing AI for product/service innovation encounter: technology, value-creation, regulatory, and normative. Strategic dilemmas such as build versus buy and exploration versus exploitation also influenced our thinking when developing APPRAISE.

Between the lines 

In our journey from research to model development, we came across many minor insights and some eye-openers. Combining these, we can draw some conclusions at an aggregate level:

  1. Organizations understand too little of what it takes to comply with the AIA. Creative compliance is noticeable in their actions, and the scope of the compliance actions they undertake is limited, especially in breadth.
  2. The consequences of strategic decisions, such as outsourcing and offshoring, for AIA compliance are often underestimated. Organizations tend to be driven by value-creation and technology pressures when making these decisions, often underestimating the effect of regulation.

Looking forward, our findings need to be replicated by other studies, and many avenues require deeper investigation. For example, how organizational capital creates normative pressures on AIA compliance needs considerable research. From a governance perspective, we need to understand which options exist, at what economic cost, and to what extent they stifle innovation, for AI innovation must prevail for the benefit of humanity.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.



  • Ā© 2025 MONTREAL AI ETHICS INSTITUTE.
  • This work is licensed under a Creative Commons Attribution 4.0 International License.