
AI Policy Corner: AI for Good Summit 2025

August 5, 2025

✍️ By Alexander Wilhelm.

Alexander is a PhD student in Political Science and a Graduate Affiliate at the Governance and Responsible AI Lab (GRAIL), Purdue University.


📌 Editor’s Note: This article is part of our AI Policy Corner series, a collaboration between the Montreal AI Ethics Institute (MAIEI) and the Governance and Responsible AI Lab (GRAIL) at Purdue University. The series provides concise insights into critical AI policy developments from the local to international levels, helping our readers stay informed about the evolving landscape of AI governance. This piece spotlights the AI for Good Global Summit 2025, which the author attended.


AI for Good Summit 2025

A variety of stakeholders gathered in Geneva from July 8 to 11 for the International Telecommunication Union’s (ITU) AI for Good Global Summit 2025. The more than 11,000 participants discussed topics including AI governance, global health, standard setting, and environmental impacts; engaged with AI art in multiple media; experienced new uses for AI in the exhibit areas; and heard from a wide variety of speakers and perspectives on the TED Talk-style stages.

A message sent on behalf of Pope Leo XIV encouraged participants “to seek ethical clarity and to establish a coordinated local and global governance of AI, based on the shared recognition of the inherent dignity and fundamental freedoms of the human person,” while artist and ITU Goodwill Ambassador will.i.am discussed the role of AI in creative pursuits and data practices.

Several United Nations (UN) agencies released reports, announced new standards databases, and launched AI tools throughout the Summit. Amid this dizzying array of AI applications, this week’s AI Policy Corner focuses on several of these UN announcements.

AI Standards Exchange Database

Several panels and speakers noted the challenge of harmonizing standards across the many AI actors. To alleviate these logistical problems, the AI Standards Exchange Database was launched during the Summit. More than just a list of over 700 standards, the database provides curation tools to search by industry sector, issuing body, type of AI use, and a standard’s stage of development. This is a step toward a more unified approach to standard setting within and across industries and actors.

Publication of the United Nations AI Activities Report

The ITU published the United Nations Activities on Artificial Intelligence (AI) 2024 report during the Summit. The report identifies 729 AI projects across 53 UN entities and highlights multistakeholderism, noting that 45% of the projects involve some form of collaboration, reflecting the broad participation seen in the Summit panels.

The 500-page document breaks the data down by UN agency or body. Importantly, the executive summary calls for more work on several Sustainable Development Goals (SDGs), including “clean water and sanitation” (SDG 6) and “responsible consumption and production” (SDG 12), while noting the work being done in areas like “reducing inequality” (SDG 10) and “decent work and economic growth” (SDG 8).

UNICC AI Hub Launch

On the first day of the Summit, the United Nations International Computing Centre (UNICC) announced the launch of its AI Hub. The AI Hub is designed to assist other UN agencies in developing AI solutions through tools like the AI Academy, which trains UN employees in the use of AI.

Additionally, the AI Hub’s sandbox allows both the UN and public sector partners to test generative AI models. The sandbox offers a technical track for code-oriented users and a drag-and-drop interface for those with less coding experience. While aiding AI-driven problem solving throughout the UN network, the sandbox “ensur[es] compliance with data protection and organizational policies.”

Going Forward

AI for good remains the goal for the ITU as it looks ahead to the 2026 Summit. The multistakeholder representation at the Summit and the variety of approaches to AI governance highlight the multi-faceted impacts of AI. In this environment, UN agencies are actively grappling with the potential benefits, as well as the risks and harms, of AI tools as they endeavor to use ‘AI for good’ to meet the Sustainable Development Goals by 2030.

Further Reading

  1. July 21, 2025. “AI Standards Exchange Database Welcomes Contributions.” ITU
  2. July 11, 2025. “WHO, ITU, WIPO Showcase a New Report on AI Use in Traditional Medicine.” WHO
  3. Le Poidevin, Olivia. July 11, 2025. “UN Report Urges Stronger Measures to Detect AI-Driven Deepfakes.” Reuters

Photo credit: @Simprints on X (July 30, 2025)

