Montreal AI Ethics Institute

Democratizing AI ethics literacy

AI Policy Corner: Discussing the White House’s 2025 AI Action Plan

October 13, 2025

✍️ By Matthew Catani.

Matthew is an undergraduate student in Artificial Intelligence and a Research Assistant at the Governance and Responsible AI Lab (GRAIL), Purdue University.


📌 Editor’s Note: This article is part of our AI Policy Corner series, a collaboration between the Montreal AI Ethics Institute (MAIEI) and the Governance and Responsible AI Lab (GRAIL) at Purdue University. The series provides concise insights into critical AI policy developments from the local to international levels, helping our readers stay informed about the evolving landscape of AI governance. This piece analyzes the White House’s 2025 AI Action Plan.


America’s AI Action Plan

On July 23, 2025, the White House released its AI Action Plan, following President Trump’s January executive order mandating the development of a proposal to “sustain and enhance America’s global AI dominance.” The document establishes an array of goals the administration hopes to achieve, along with recommended policy actions for each. These goals are organized under three pillars: Accelerate AI Innovation, Build American AI Infrastructure, and Lead in International AI Diplomacy and Security. This article discusses the major elements of each pillar and what the document means for the future of AI in America as a whole.

Pillar I: Accelerate AI Innovation

This section mainly deals with deregulating AI systems and development, as well as supporting the expansion and adoption of such systems. On deregulation, the plan advises revising or repealing laws and other official materials that hinder AI development, per Executive Order 14192, “Unleashing Prosperity Through Deregulation.” It also encourages revising the National Institute of Standards and Technology (NIST) AI Risk Management Framework to remove references to concepts such as misinformation, DEI (diversity, equity, and inclusion), and climate change. On support, the document pushes for investment in improving AI datasets and evaluation, and encourages the adoption of AI in business and government, with special importance given to the Department of Defense.

Pillar II: Build American AI Infrastructure

This section focuses less on AI systems themselves and more on the material and resources needed to power the industry. This involves scaling back environmental regulations that might impede the construction of data centers and energy infrastructure, including rules under the Clean Air Act, the Clean Water Act, and other related laws. It also encourages improving semiconductor production through the Department of Commerce and training the workforce on AI usage and infrastructure through the Department of Labor (DoL). These DoL programs are to range from educating current workers to training middle and high school students through pre-apprenticeship programs. The section further advises securing America’s critical AI infrastructure through cybersecurity improvements and policy development.

Pillar III: Lead in International AI Diplomacy and Security

While the previous pillars deal with AI policy within the United States, this section is concerned with America’s role in the growth and use of AI in the international arena. Much of it is dedicated to controlling the export of vital AI systems and enablers (such as semiconductors). It encourages providing such resources to U.S. allies and aligning global AI policy with the ideals of the United States, an effort framed explicitly as a counter to China’s growing influence on the world stage.

Overall Impact

Altogether, the document shows a current administration heavily invested in the development and success of AI. Much of the plan involves stripping down regulations on AI itself, or on factors that may inhibit its growth, while promoting industry growth and government adoption of AI. If this course remains steady, we can expect AI to become a far greater element of American industry and to see its extensive use in governmental operations.

Further Readings

  • Data Centers and Water Consumption
  • White House AI Action Plan: A First Look
  • AI in China: A Sleeping Giant Awakens
