AI Policy Corner: Transparency in AI Lab Governance: Comparing OpenAI and Anthropic’s Approaches

November 10, 2025

✍️ By Tejasvi Nallagundla.

Tejasvi is an undergraduate student in Computer Science, Artificial Intelligence, and Global Studies, and an Undergraduate Affiliate at the Governance and Responsible AI Lab (GRAIL) at Purdue University.


📌 Editor’s Note: This article is part of our AI Policy Corner series, a collaboration between the Montreal AI Ethics Institute (MAIEI) and the Governance and Responsible AI Lab (GRAIL) at Purdue University. The series provides concise insights into critical AI policy developments from the local to international levels, helping our readers stay informed about the evolving landscape of AI governance. This piece compares OpenAI’s Preparedness Framework (Version 2, last updated April 15, 2025) and Anthropic’s Responsible Scaling Policy (Version 2.2, effective May 14, 2025), highlighting how each lab approaches transparency as part of its governance processes.

Photo credit: Yasmin Dwiputri & Data Hazards Project / https://betterimagesofai.org / Licensed under CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/)


As AI systems continue to advance, how major AI labs communicate their safety decisions has become nearly as important as the decisions themselves. OpenAI’s Preparedness Framework and Anthropic’s Responsible Scaling Policy each dedicate a section of their documents to this question of transparency, and comparing them reveals differences not only in language but also in approach. Read side by side, they show transparency shifting from a matter of communication to a part of governance in its own right.

1. The Purpose of Transparency 

OpenAI opens their subsection “Transparency and external participation,” within the Building Trust section, with an emphasis on public disclosure: “We will release information about our Preparedness Framework results in order to facilitate public awareness of the state of frontier AI capabilities for major deployments.”

Anthropic, in their subsection “Transparency and External Input” within the Governance and Transparency section, start with a broader motivation: “To advance the public dialogue on the regulation of frontier AI model risks and to enable examination of our actions, we commit to the following,” before moving into more specific points such as public disclosures and other commitments.

The language used by OpenAI in their Preparedness Framework focuses on sharing information about model capabilities and decisions around their deployment to, in their words, “facilitate public awareness.” On the other hand, Anthropic’s language in their Responsible Scaling Policy connects transparency more directly to advancing “public dialogue” and enabling “examination” of their actions.

While both underscore the importance of transparency, OpenAI’s phrasing focuses more on awareness and communication, while Anthropic’s leans toward dialogue and engagement.

2. External Input and Participation 

Both labs extend their respective ideas of transparency into how external input and evaluation are built into their processes.

OpenAI mentions that, when “a deployment warrants deeper testing” based on their evaluations, they will work with third parties to “independently evaluate models […] when available and feasible.” They extend similar logic to safeguards as well, stating that, when “a deployment warrants third-party stress testing of safeguards and if high-quality third-party testing is available, [OpenAI] may seek this out in particular for models that are over a High capability threshold.”

They also note that their Safety Advisory Group (SAG), “an internal, cross-functional group of OpenAI leaders,” may “opt to get independent expert opinion on the evidence being produced to SAG,” and that “these opinions will form part of the analysis presented to SAG in making its decision on the safety of a deployment.”

Anthropic integrates external input more formally. They state that the company will “solicit input from external experts in relevant domains in the process of developing and conducting capability and safeguards assessments,” and that they “may also solicit external expert input prior to making final decisions on the capability and safeguards assessments.” In addition, Anthropic commits that “on approximately an annual basis, we will commission a third-party review that assesses whether we adhered to this policy’s main procedural commitments.” They also note that they will “notify a relevant U.S. Government entity if a model requires stronger protections than the ASL-2 Standard.”

Thus, although both labs emphasize bringing in outside perspectives, they frame it differently: OpenAI’s approach leans toward the discretionary, while Anthropic’s is more institutionalized.

3. Final Thoughts

While this comparison only captures one part of a much larger picture, it underscores how the idea of transparency itself is changing, reflecting not just how institutions share information, but how transparency is increasingly tied to the way safety decisions are made and justified. Ultimately, these differences in how AI labs define and operationalize transparency shape how accountability is built into governance itself.

Further Reading 

  • Chartered Governance Institute UK & Ireland: “From OpenAI to Anthropic: who’s leading on AI governance?”
  • National Institute of Standards and Technology (NIST): “AI Risk Management Framework”
  • Department for Science, Innovation & Technology (UK): “Frontier AI Safety Commitments, AI Seoul Summit 2024”
