🔬 Research Summary by Lewis Ho, a researcher on Google DeepMind’s AGI Strategy and Governance Team.
[Original paper by Lewis Ho, Joslyn Barnhart, Robert Trager, Yoshua Bengio, Miles Brundage, Allison Carnegie, Rumman Chowdhury, Allan Dafoe, Gillian Hadfield, Margaret Levi, and Duncan Snidal]
Overview: International efforts may be useful for ensuring that the potentially transformative impacts of advanced AI are beneficial to humanity. This white paper provides an overview of the kinds of international institutions that could help harness the benefits and manage the risks of advanced AI.
Introduction
The opportunities and challenges posed by advanced AI have spurred lively public discussions about international institutions for AI. For example, the UN Secretary-General has discussed the possibility of an organization inspired by the IAEA (International Atomic Energy Agency), and the present Chair of the UK’s Foundation Model Task Force has previously proposed an AI project modeled after CERN (European Organization for Nuclear Research).
To understand what kinds of institutions will be most appropriate for AI systems of the future, this paper focuses on the challenges of AI and the institutional functions that could help address them. In particular, it addresses 1) the benefits and risks of advanced AI that require international efforts to manage, 2) the institutional functions that could further such efforts, and 3) the kinds of institutions that could provide such functions.
Key Insights
Harnessing AI for global benefit
The authors argue that international collaborations may be useful for spreading AI benefits globally. Many societies that could benefit from AI may not have the resources, infrastructure, or workforce to make the most of cutting-edge systems. Furthermore, frontier AI development may not focus on global needs, and the economic benefits of commercial AI technologies could primarily benefit developed countries.
The paper lists several institutional functions that could help address this challenge. International efforts to build consensus on AI opportunities could support efforts to distribute and enable access to AI. International collaborations to develop frontier AI may also be beneficial in certain cases.
Managing risks from advanced systems
International efforts may also be useful for managing the risks posed by advanced AI. Without adequate safeguards, advanced AI systems may be misused by malicious actors worldwide with transnational consequences—for example, to engineer cyber- or bioweapons or conduct disinformation campaigns. Systems deployed irresponsibly in high-stakes contexts may also fail unexpectedly, causing accidents with international consequences.
Protocols for responsible development and deployment may be useful for mitigating these accident and misuse risks. International institutions can promote their adoption by building consensus on risks and how they can be mitigated, and by setting safety norms and standards. International efforts to conduct or support AI safety research may accelerate the development of safety protocols and broaden their reach.
Furthermore, the significant geopolitical benefits of rapid AI development may leave states disinclined to regulate AI adequately. International institutions could support and incentivize the adoption of standards, and even monitor compliance with governance frameworks.
Four kinds of institutions
The authors discuss four complementary institutional models to perform these functions:
- An intergovernmental Commission on Frontier AI could establish a scientific position on the opportunities and risks of advanced AI. In doing so, it would build public consensus on AI prospects and issues, contribute to a scientifically informed account of AI use and risk mitigation, and be a source of expertise for policymakers.
- An intergovernmental or multi-stakeholder Advanced AI Governance Organization could internationalize and align efforts to address global risks from advanced AI systems by setting governance norms and standards, assisting in their implementation, or performing compliance monitoring functions for a governance regime.
- A Frontier AI Collaborative could promote access to advanced AI as an international public-private partnership. In doing so, it could help underserved societies benefit from cutting-edge AI technology and promote international access to advanced AI technology for safety and governance objectives.
- An AI Safety Project could bring together leading researchers and engineers to work on technical mitigations of AI risk. It would accelerate AI safety R&D by increasing its scale, resourcing, and coordination.
Uncertainties about the viability of these models
There are important open questions about the viability of such models:
- The lack of existing scientific research on AI opportunities and risks, and the challenging and politically charged subject matter, may hamper the activities of a Commission on Frontier AI.
- It may be difficult for an Advanced AI Governance Organization to set standards that keep up with a quickly changing AI risk landscape, and the many challenges of international coordination raise questions about how participation in a governance regime can be incentivized.
- The potentially dual-use nature of general-purpose AI technologies might restrict a Frontier AI Collaborative’s ability to provide access to systems, and the significant obstacles to underserved societies making use of AI systems raise questions about its effectiveness as a means of promoting sustainable development.
- An AI Safety Project could struggle to secure adequate model access to conduct safety research, and it may not be worthwhile to divert safety researchers away from frontier labs.
Between the lines
This paper could provide conceptual clarity to the rich discussions of international AI governance, which will become more pressing as AI systems continue to improve. Greater attention to this topic could be warranted: to understand which of these models best match the present opportunities and needs—especially in reference to existing efforts in AI policy; to examine how such models should be designed in practice, and whether their viability issues can be resolved; and to lay the practical groundwork for the institutions that may be necessary to ensure the systems of the future benefit humanity globally.