Montreal AI Ethics Institute


Democratizing AI ethics literacy

AI Policy Corner: Frontier AI Safety Commitments, AI Seoul Summit 2024

April 28, 2025

✍️ By Alexander Wilhelm.

Alexander is a PhD Student in Political Science and a Graduate Affiliate at the Governance and Responsible AI Lab (GRAIL), Purdue University.


📌 Editor’s Note: This article is part of our AI Policy Corner series, a collaboration between the Montreal AI Ethics Institute (MAIEI) and the Governance and Responsible AI Lab (GRAIL) at Purdue University. The series provides concise insights into critical AI policy developments from the local to international levels, helping our readers stay informed about the evolving landscape of AI governance.


Discussions among governments, civil society, and companies on the ‘safe’ development of AI have advanced through gatherings such as the 2023 AI Safety Summit in the UK and the 2024 AI Seoul Summit. Co-hosted by the United Kingdom and the Republic of Korea, the Seoul Summit produced a framework of commitments, known as the Frontier AI Safety Commitments, which 20 organizations, including Anthropic, Microsoft, NVIDIA, and OpenAI, have agreed to. These commitments required signatories to publish “a safety framework focused on severe risks” at the AI Action Summit in France in February 2025 (see The AI Ethics Brief #158 for more on the Paris AI Action Summit). However, rhetoric at the Paris Summit emphasized the benefits of AI rather than its potential harms and risks, raising questions about the future of the three outcomes outlined in the Frontier AI Safety Commitments.

Three outcomes of the Frontier AI Safety Commitments

Outcome 1: Organisations effectively identify, assess and manage risks when developing and deploying their frontier AI models and systems.

  • Signatories to the Commitments agree to identify risks relevant to their frontier models, including risks surfaced by external entities and governments. The Commitments define frontier models as “highly capable general-purpose AI models or systems that can perform a wide variety of tasks and match or exceed the capabilities present in the most advanced models.” Multiple stakeholders are expected to collaboratively identify unacceptable levels of risk in frontier models and to justify those boundaries once they are set. Risk mitigation should then be planned to stay within the acceptable levels, with a commitment not to develop models that cannot meet these standards. A minimal sketch of this threshold logic appears below.
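To make the structure of Outcome 1 concrete, here is a minimal, hypothetical Python sketch of the threshold logic it describes: evaluation scores are compared against pre-declared risk boundaries, and development or deployment halts when a model cannot be kept within them. The capability names, scores, and thresholds are illustrative assumptions, not drawn from any signatory’s published safety framework.

```python
from dataclasses import dataclass

@dataclass
class RiskThreshold:
    capability: str      # dangerous-capability category (illustrative)
    max_score: float     # risk boundary agreed with external stakeholders
    justification: str   # the Commitments ask for justified boundaries

# Hypothetical thresholds; real frameworks define their own categories.
THRESHOLDS = [
    RiskThreshold("autonomous_replication", 0.2, "set with external evaluators"),
    RiskThreshold("cyber_offense_uplift", 0.3, "set with home government"),
]

def deployment_decision(eval_scores: dict[str, float]) -> str:
    """Return 'deploy', 'mitigate', or 'halt' given evaluation scores."""
    decision = "deploy"
    for t in THRESHOLDS:
        score = eval_scores.get(t.capability, 0.0)
        if score > t.max_score:
            # Commitment: do not develop or deploy models that cannot be
            # kept within the agreed risk boundary.
            return "halt"
        if score > 0.8 * t.max_score:
            # Approaching the boundary: plan mitigations to stay within it.
            decision = "mitigate"
    return decision

print(deployment_decision({"autonomous_replication": 0.10,
                           "cyber_offense_uplift": 0.25}))  # -> mitigate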

Outcome 2: Organisations are accountable for safely developing and deploying their frontier AI models and systems.

  • Groups that voluntarily join the Frontier AI Safety Commitments must update their policies on an ongoing basis, keeping the agreement relevant as these technologies evolve.

Outcome 3: Organisations’ approaches to frontier AI safety are appropriately transparent to external actors, including governments.

  • Signatories are expected to provide transparency to the public except when “doing so would increase risk or divulge sensitive commercial information to a degree disproportionate to the societal benefit.” Even then, more detail should be provided to “trusted actors,” such as a home government. Finally, external actors should be involved in assessing risk, in reviewing an organization’s internal plans for safely developing frontier AI models, and in checking its follow-through on implementing those plans. A sketch of this tiered-disclosure idea follows.
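As an illustration, here is a hypothetical Python sketch of the tiered disclosure described in Outcome 3: fuller detail flows to trusted actors such as a home government, while public reporting omits material whose release would increase risk or expose commercially sensitive information. The report fields and tier names are invented for this example, not taken from the Commitments text.

```python
# Each report field is tagged with the lowest audience tier allowed to see it.
SAFETY_REPORT = {
    "risk_assessment_summary": ("public", "summary of severe-risk evaluations"),
    "mitigation_commitments": ("public", "high-level mitigation plans"),
    "detailed_eval_results": ("trusted", "raw scores for the home government"),
    "red_team_methodology": ("internal", "withheld: disclosure could increase risk"),
}

TIERS = {"public": 0, "trusted": 1, "internal": 2}

def view_for(audience: str) -> dict[str, str]:
    """Return the report fields visible to a given audience tier."""
    allowed = TIERS[audience]
    return {field: description
            for field, (tier, description) in SAFETY_REPORT.items()
            if TIERS[tier] <= allowed}

print(view_for("public"))   # public fields only
print(view_for("trusted"))  # public + trusted fields
```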

Recent Developments in Frontier AI Governance

The Frontier AI Safety Commitments provide a framework for mitigating risks to safety, security, and transparency through governance strategies such as disclosure, evaluation, and performance requirements. While not all AI development organizations have signed the Commitments, consensus on frontier AI standards is emerging: the Frontier AI Safety Commitments are reflected in China’s AI Safety Commitments.

Nonetheless, some experts remain concerned about the voluntary nature of these commitments. The Paris AI Action Summit’s focus on the promise and opportunity of AI, rather than on the risks latent in frontier models, disappointed some civil society groups. The commitments still stand for the 20 signatories, but the future of such voluntary standards is an open question as the focus of AI summits shifts.

Further Reading

  1. The AI Seoul Summit 2024
  2. Tech Giants Pledge AI Safety Commitments — Including a ‘Kill Switch’ if They Can’t Mitigate Risks
  3. The Bletchley Park Process Could be a Building Block for Global Cooperation on AI Safety

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.
