
International Institutions for Advanced AI

August 22, 2023

🔬 Research Summary by Lewis Ho, a researcher on Google DeepMind’s AGI Strategy and Governance Team.

[Original paper by Lewis Ho, Joslyn Barnhart, Robert Trager, Yoshua Bengio, Miles Brundage, Allison Carnegie, Rumman Chowdhury, Allan Dafoe, Gillian Hadfield, Margaret Levi, and Duncan Snidal]


Overview: International efforts may be useful for ensuring that the potentially transformative impacts of AI are beneficial to humanity. This white paper provides an overview of the kinds of international institutions that could help harness the benefits and manage the risks of advanced AI.


Introduction

The opportunities and challenges posed by advanced AI have spurred lively public discussions about international institutions for AI. For example, the UN Secretary-General has discussed the possibility of an organization inspired by the IAEA (International Atomic Energy Agency), and the present Chair of the UK’s Foundation Model Task Force has previously proposed an AI project modeled after CERN (the European Organization for Nuclear Research).

To understand what kinds of institutions will be most appropriate for AI systems of the future, this paper focuses on the challenges of AI and the institutional functions that could help address them. In particular, it addresses 1) the benefits and risks of advanced AI that require international efforts to manage, 2) the institutional functions that could further such efforts, and 3) the kinds of institutions that could provide such functions.

Key Insights

Harnessing AI for global benefit

The authors argue that international collaborations may be useful for spreading AI benefits globally. Many societies that could benefit from AI may not have the resources, infrastructure, or workforce to make the most of cutting-edge systems. Furthermore, frontier AI development may not focus on global needs, and the economic gains from commercial AI technologies could accrue primarily to developed countries.

The paper lists several institutional functions that could help address this challenge. International efforts to build consensus on AI opportunities could support work to distribute AI technologies and enable access to them. International collaborations to develop frontier AI may also be beneficial in certain cases.

Managing risks from advanced systems

International efforts may also be useful for managing the risks posed by advanced AI. Without adequate safeguards, advanced AI systems may be misused by malicious actors worldwide, with transnational consequences; for example, they could be used to engineer cyber- or bioweapons or to conduct disinformation campaigns. Systems deployed irresponsibly in high-stakes contexts may also fail unexpectedly, causing accidents with international consequences.

Protocols for responsible development and deployment may be useful for mitigating these accident and misuse risks. International institutions can promote their adoption by building consensus on risks and how to mitigate them, and by setting safety norms and standards. International efforts to conduct or support AI safety research may accelerate the development of safety protocols and broaden their reach.

Furthermore, the significant geopolitical benefits of rapid AI development may leave states reluctant to regulate AI adequately. International institutions may support and incentivize the adoption of standards and even monitor compliance with governance frameworks.

Four kinds of institutions

The authors discuss four complementary institutional models to perform these functions: 

  • An intergovernmental Commission on Frontier AI could establish a scientific position on the opportunities and risks of advanced AI. In doing so, it would build public consensus on AI prospects and issues, contribute to a scientifically informed account of AI use and risk mitigation, and be a source of expertise for policymakers.
  • An intergovernmental or multi-stakeholder Advanced AI Governance Organization could internationalize and align efforts to address global risks from advanced AI systems by setting governance norms and standards, assisting in their implementation, or performing compliance monitoring functions for a governance regime. 
  • A Frontier AI Collaborative could promote access to advanced AI as an international public-private partnership. In doing so, it could help underserved societies benefit from cutting-edge AI technology and promote international access to advanced AI technology for safety and governance objectives.
  • An AI Safety Project could bring together leading researchers and engineers to work on technical mitigations of AI risk. It would accelerate AI safety R&D by increasing its scale, resourcing, and coordination.

Uncertainties about the viability of these models

There are important open questions about the viability of such models:

  • The lack of existing scientific research on AI opportunities and risks, and the challenging and politically charged subject matter, may hamper the activities of a Commission on Frontier AI. 
  • It may be difficult for an Advanced AI Governance Organization to set standards that keep up with a quickly changing AI risk landscape, and the many challenges of international coordination raise questions about incentivizing participation in a governance regime. 
  • The potentially dual-use nature of general-purpose AI technologies might restrict a Frontier AI Collaborative’s ability to provide access to systems, and the significant obstacles underserved societies face in making use of AI systems raise questions about its effectiveness as a means of promoting sustainable development. 
  • An AI Safety Project could struggle to secure adequate model access to conduct safety research, and it may not be worthwhile to divert safety researchers away from frontier labs.

Between the lines

This paper could bring conceptual clarity to the rich discussions of international AI governance, which will become more pressing as AI systems continue to improve. Greater attention to this topic could be warranted: to understand which of these models best match present opportunities and needs, especially in light of existing efforts in AI policy; to examine how such models should be designed in practice, and whether their viability issues can be resolved; and to lay the practical groundwork for the institutions that may be necessary to ensure the systems of the future benefit humanity globally.

