International Institutions for Advanced AI

August 22, 2023

🔬 Research Summary by Lewis Ho, a researcher on Google DeepMind’s AGI Strategy and Governance Team.

[Original paper by Lewis Ho, Joslyn Barnhart, Robert Trager, Yoshua Bengio, Miles Brundage, Allison Carnegie, Rumman Chowdhury, Allan Dafoe, Gillian Hadfield, Margaret Levi, and Duncan Snidal]


Overview: International efforts may be useful for ensuring that the potentially transformative impacts of AI benefit humanity. This white paper provides an overview of the kinds of international institutions that could help harness the benefits and manage the risks of advanced AI.


Introduction

The opportunities and challenges posed by advanced AI have spurred lively public discussions about international institutions for AI. For example, the UN Secretary-General has discussed the possibility of an organization inspired by the IAEA (International Atomic Energy Agency), and the present Chair of the UK’s Foundation Model Task Force has previously proposed an AI project modeled after CERN (European Organisation for Nuclear Research).

To understand what kinds of institutions will be most appropriate for AI systems of the future, this paper focuses on the challenges of AI and the institutional functions that could help address them. In particular, it addresses 1) the benefits and risks of advanced AI that require international efforts to manage, 2) the institutional functions that could further such efforts, and 3) the kinds of institutions that could provide such functions.

Key Insights

Harnessing AI for global benefit

The authors argue that international collaborations may be useful for spreading AI benefits globally. Many societies that could benefit from AI may not have the resources, infrastructure, or workforce to make the most of cutting-edge systems. Furthermore, frontier AI development may not focus on global needs, and the economic benefits of commercial AI technologies could primarily benefit developed countries. 

The paper lists several institutional functions that could help address this challenge. International efforts to build consensus on AI's opportunities could underpin initiatives to distribute AI and enable access to it. International collaborations to develop frontier AI may also be beneficial in certain cases.

Managing risks from advanced systems

International efforts may also be useful for managing the risks posed by advanced AI. Without adequate safeguards, advanced AI systems may be misused by malicious actors worldwide, with transnational consequences: for example, to engineer cyberweapons or bioweapons or to conduct disinformation campaigns. Systems deployed irresponsibly in high-stakes contexts may also fail unexpectedly, causing accidents with international consequences.

Protocols for responsible development and deployment may be useful for mitigating these accident and misuse risks. International institutions can promote their adoption by building consensus on risks and how to mitigate them, and by setting safety norms and standards. International efforts to conduct or support AI safety research may accelerate the development of safety protocols and extend their reach.

Furthermore, the significant geopolitical benefits of rapid AI development may disincline states to adequately regulate AI. International institutions may support and incentivize the adoption of standards and even monitor compliance with governance frameworks.

Four kinds of institutions

The authors discuss four complementary institutional models to perform these functions: 

  • An intergovernmental Commission on Frontier AI could establish a scientific position on the opportunities and risks of advanced AI. In doing so, it would build public consensus on AI prospects and issues, contribute to a scientifically informed account of AI use and risk mitigation, and be a source of expertise for policymakers.
  • An intergovernmental or multi-stakeholder Advanced AI Governance Organization could internationalize and align efforts to address global risks from advanced AI systems by setting governance norms and standards, assisting in their implementation, or performing compliance monitoring functions for a governance regime. 
  • A Frontier AI Collaborative could promote access to advanced AI as an international public-private partnership. In doing so, it could help underserved societies benefit from cutting-edge AI technology and promote international access to advanced AI technology for safety and governance objectives.
  • An AI Safety Project could bring together leading researchers and engineers to work on technical mitigations of AI risk. It would accelerate AI safety R&D by increasing its scale, resourcing, and coordination.

Uncertainties about the viability of these models

There are important open questions about the viability of such models:

  • The lack of existing scientific research on AI opportunities and risks, and the challenging and politically charged subject matter, may hamper the activities of a Commission on Frontier AI. 
  • It may be difficult for an Advanced AI Governance Organization to set standards that keep up with a quickly changing AI risk landscape, and the many challenges of international coordination raise questions about incentivizing participation in a governance regime. 
  • The potentially dual-use nature of general-purpose AI technologies might restrict a Frontier AI Collaborative’s ability to provide access to systems, and the significant obstacles to underserved societies making use of AI systems raise questions about its effectiveness as a means of promoting sustainable development. 
  • An AI Safety Project could struggle to secure adequate model access to conduct safety research, and it may not be worthwhile to divert safety researchers away from frontier labs.

Between the lines

This paper could bring conceptual clarity to the rich discussions of international AI governance, which will become more pressing as AI systems continue to improve. Greater attention to this topic could be warranted: to understand which of these models best match present opportunities and needs, especially in reference to existing efforts in AI policy; to examine how such models should be designed in practice, and whether their viability issues can be resolved; and to lay the practical groundwork for the institutions that may be necessary to ensure the systems of the future benefit humanity globally.
