🔬 Research summary by Angshuman Kaushik, Researcher in AI Policy, Governance and Ethics.
[Original paper by Charlotte Stix]
Overview: To implement governance efforts for artificial intelligence (AI), new institutions need to be established, both at a national and an international level. This paper outlines a scheme of such institutions and conducts an in-depth investigation of three key components of any future AI governance institution, exploring their benefits and associated drawbacks. Thereafter, the paper highlights significant aspects of various institutional roles, specifically around questions of institutional purpose, and frames what these could look like in practice by placing the debates in a European context and proposing different iterations of a European AI Agency. Finally, conclusions and future research directions are proposed.
The paper begins by drawing the reader's attention to the fact that governments around the world have begun to approach the governance of AI through multiple controls. One example is the European Union's recent Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonized Rules on Artificial Intelligence and Amending Certain Union Legislative Acts ("Artificial Intelligence Act"), which puts forward a regulatory framework for high-risk AI systems; another is the Trade and Technology Council co-established by the US and the EU with a mandate to cooperate on the development of suitable standards for AI. Further, because the field of AI governance is relatively new, only a few specialist governmental institutions are exclusively dedicated to the area. According to the author, to properly develop, support and implement new AI governance efforts, a number of new institutions will likely need to be established in the future. There are broadly two types of institutions one could investigate: those that already exist and may be adapted, and those that do not yet exist but will eventually come into existence to fill the void created by new governance initiatives. This paper emphasizes the latter type, with a particular focus on institutions set up by governments. To that end, the paper builds on recent academic calls for an international governance coordinating committee for AI, an international regulatory agency for AI, and similar bodies, and draws on existing scholarship in the area. It addresses itself to those individuals who will be involved in setting up new institutions and those who are interested in conducting further research on pragmatic institution building for AI governance.
Building new AI governance institutions
The paper states that AI-specific governance institutions working on soft governance mechanisms with non-binding rules have already come into existence; examples include the OECD, the G7, and the Global Partnership on AI. However, there is mounting pressure to develop and implement stronger and more binding AI governance mechanisms than those covered by ethical principles. As countries move towards harder governance efforts, they are likely to require increasingly specialized institutions to oversee their implementation. Moreover, as AI governance efforts expand and more coordination, action and policy proposals become necessary within nations as well as at the international level, there will likely be a need for more specialized governmental agencies to handle an increasingly diverse set of tasks on top of existing work. It might be quicker, cheaper and more effective overall to build a new institution from scratch that is 'fit for purpose' rather than expend time, effort and political goodwill on changing the structure of an existing institution. The author then puts forward a selection of axes to be considered in building new AI governance institutions, namely purpose, geography and capacity, with particular emphasis on purpose.
The first question to answer is the purpose of the new institution, i.e., what is it meant to do? Under this broad heading, the paper outlines four different roles an institution for AI governance could take: coordinator, analyzer, developer and investigator.
The coordinator institution
The tasks of a coordinator institution could, for instance, include working with the rising number of ethical guidelines and attempting to operationalize them more clearly. It could also serve as an umbrella organization and coordinate activities amongst different groups. Examples of coordinator institutions are the UN, the G20, and NATO. The paper goes on to highlight that the actions of a coordinator institution must be timely and appropriate, and proposes that a future AI Agency in the EU might take up the role of a coordinator institution.
The analyzer institution
The duties of an analyzer institution could be varied, such as mapping existing efforts and identifying gaps across various governments (for instance, the European Commission) or compiling datasets and information on the technical landscape and sketching technological trajectories (for instance, the AI Index). The role of an analyzer institution is more active than that of a coordinator institution, in that it intervenes more directly in the governance or policy-making process by providing crucial information that can inform and shape decision-making.
The developer institution
A developer institution would either provide directly actionable measures or formulate new policy solutions to existing issues. It may take up the role of examining blind spots and proposing solutions on its own initiative, in addition to work it might be asked to undertake by various government agencies.
The investigator institution
It is envisioned as a 'watchdog' assigned the task of investigating whether or not actors such as governments, companies or specific organizations are adhering to the relevant standards, procedures and laws. One example of an investigator institution is the Human Rights Council. The most important requirements of such an institution would be its independence and impartiality.
The effects of AI systems transcend geographies and are not confined within national borders; therefore, many AI governance issues can be seen as multi-country concerns. Two broad considerations with respect to geography that the paper delves into are: what is the benefit or downside of a new multi-country institution, and how does it fare in comparison to nationally 'restricted' institutions? A multi-country institution must consider questions of access, inclusion and participation. One model proposed is that if several nations expect their position towards AI governance to be broadly more beneficial than that of other nations, it may be reasonable for them to cooperate and coordinate to establish a dedicated institution. Conversely, if nations choose not to form a new institution, a proliferation of similar but distinct institutions could lead to fragmentation of global AI governance regimes.
The third axis is capacity, which relates to the previous two axes, purpose and geography. It concerns what the institution needs, on both the technical and non-technical side, in order to thrive. The paper proposes that access to technical infrastructure could play an important role for future AI governance institutions; such infrastructure may include access to compute, available datasets, and testing and experimentation facilities. It can minimize bottlenecks in information exchange and reduce the lag between what is to be governed and the associated governance actions and decisions, thereby contributing to more agility, specificity and foresight in policy making for AI. On the non-technical side, the paper underscores the need to build up human capacity, which could broadly take two forms: (a) out-of-house capacity, with either (1) a network of individual experts to draw upon when needed or (2) expert groups and external panels; and (b) in-house capacity, with a team of diverse backgrounds and relevant experience in technical, legal and ethical areas.
Between the lines
The rapid and unbridled increase in the use of AI systems has made effective governance necessary, and governing effectively requires institutions that can deliver. Instead of making normative assessments of various institutional setups, this paper charts a pragmatic approach to building institutions for AI governance at a time when proposals for setting up such institutions are gathering steam. More importantly, it provides a framework to start with. Another highlight of the paper is that it points the way for future research on the topic.