🔬 Research Summary by Diptish Dey, Ph.D. and Debarati Bhaumik, Ph.D.
Diptish Dey teaches and conducts research in responsible AI at the Faculty of Business & Economics of the Amsterdam University of Applied Sciences, focusing on the auditability and explainability of AI systems.
Debarati Bhaumik is a lecturer and researcher at the Amsterdam University of Applied Sciences, working on methods for auditing AI systems and on the explainability of recommender systems.
[Original paper by Diptish Dey and Debarati Bhaumik]
Overview: This paper examines organizations’ struggle to comply with upcoming AI regulations. Drawing on insights from primary and secondary research, it proposes a governance model that enables the auditability of complete AI systems, thereby supporting transparency, explainability, and regulatory compliance.
AI, through its applications, is steadily enveloping humans and their environment. These applications are often envisaged by organizations aiming to create value in varied ways, among others through innovative products and services. That technology leads to undesirable effects is well-researched. Yet this time the stakes are of the utmost importance: the rudimentary classification arrangements in data-driven AI systems infringe on solemn and historically hard-won fundamental rights, most prominently the protection against discrimination. This prompted a vigilant European Commission to propose the Artificial Intelligence Act (AIA) to safeguard its citizens’ fundamental rights. Legislation without proper enforcement mechanisms is heterogeneous in its impact, as is noticeable with the roll-out of the GDPR. To understand how organizations (intend to) comply with the AIA, we initiated primary research in the Netherlands through a survey and validation interviews.
Learnings from GDPR
The GDPR precedes the AIA, and there are similarities in their ambitions, among others the protection of fundamental rights. Complying with ISO 27001:2013 assists organizations to a major extent in pursuing GDPR compliance: whereas the former is precise and action-oriented, the latter is not and is very open to interpretation. The GDPR’s impact has also been heterogeneous, with smaller providers affected more than larger ones. A major difference lies in the complexity of the AIA regime compared to that of the GDPR. The black-box behavior of stacked-up algorithms and systems within a single value proposition creates ample opportunity for providers to comply creatively. Furthermore, the increasingly DIY nature of AI development has a cumulative effect on the lack of compliance.
Size matters yet again
We conducted a survey in the Netherlands to analyze the extent to which organizations currently comply with the upcoming AIA. The survey, conducted in 2023 and covering more than 30 organizations, provided unique insights into their (lack of) compliance and the priority they assign to eventually becoming compliant with the AIA. We studied the moderating role that variables such as size and level of outsourcing, among others, play in an organization’s level of AIA compliance. Subsequently, we held validation meetings with several survey participants to identify root causes, gaining valuable insights into the process and resource challenges that organizations face. Almost all organizations overstated their actual level of compliance. Among the many hypotheses tested, the relationship between organization size and level of compliance stood out: smaller organizations were less compliant, and they also assigned a lower priority to complying with the AIA. Are we re-experiencing the heterogeneous effects seen with the GDPR? How can we better enforce the AIA?
Importance of audit in AI governance
Enforcement mechanisms are essential for legislative and regulatory success. From an enforcement perspective, the AIA resembles a ‘command and control’ strategy, in which creative compliance is largely prevented through a balance between deterrence and audit. Auditing would greatly improve the chances of discovering non-compliant AI systems. However, auditing AI technologies only, as opposed to complete systems, would be insufficient due to the former’s increasingly complex and non-transparent behavior. More importantly, the question is: how do we govern the development of AI systems in organizations? Which considerations do we need to make in the design of a governance model? Which processes must accompany the implementation of such a model? Do we need to create new functions in charge of these processes? To what extent is such a governance model auditable against pre-defined KPIs? What is the economic cost of implementing such a governance model? These are all open questions today, requiring research that is likely to generate stimulating insights.
The APPRAISE framework
We propose an AI governance framework, APPRAISE, which results from our primary research insights and from argumentation in secondary sources. The latter contributed to recognizing and analyzing four pressures that organizations embracing AI for product/service innovation encounter: technological, value-creation, regulatory, and normative. Strategic dilemmas such as build versus buy and exploration versus exploitation also influenced our thinking in developing APPRAISE.
Between the lines
In our journey from research to model development, we came across many minor insights and some eye-openers. Combining these, we can draw some conclusions at an aggregated level:
- Organizations understand too little of what it takes to comply with the AIA. Creative compliance is noticeable in their actions, and the scope of the compliance actions they undertake is limited, especially in breadth.
- The consequences of strategic decisions, such as outsourcing and offshoring, on AIA compliance are often underestimated. Organizations tend to be driven by value-creation and technology pressures when making these decisions, overlooking the effect of regulation.
Looking forward, our findings need to be replicated by other studies, and many avenues require deeper investigation. For example, how organizational capital creates normative pressures on AIA compliance needs considerable research. From a governance perspective, which options exist, at what economic cost, and to what extent they stifle innovation also merit research, for AI innovation must prevail for the benefit of humanity.