AI Policy Corner: AI Governance in East Asia: Comparing the AI Acts of South Korea and Japan

January 19, 2026

✍️ By Selen Dogan Kosterit

Selen is a PhD Student in Political Science and a Graduate Affiliate at the Governance and Responsible AI Lab (GRAIL), Purdue University.


📌 Editor’s Note: This article is part of our AI Policy Corner series, a collaboration between the Montreal AI Ethics Institute (MAIEI) and the Governance and Responsible AI Lab (GRAIL) at Purdue University. The series provides concise insights into critical AI policy developments from the local to international levels, helping our readers stay informed about the evolving landscape of AI governance. This piece spotlights South Korea’s AI Framework Act and compares it with Japan’s AI Promotion Act.


South Korea’s “Framework Act on the Development of Artificial Intelligence and Establishment of Trust Foundation” (hereinafter referred to as South Korea’s AI Framework Act), which was enacted in January 2025 and recently revised, will take effect on 22 January 2026.

As South Korea prepares to become the first country to enforce a comprehensive AI regulatory framework, attention has turned to AI governance in East Asia. In a previous article, I wrote about Japan’s AI Promotion Act, which was enacted in May 2025. This article focuses on South Korea’s AI Framework Act and compares it with Japan’s AI Promotion Act.

South Korea’s AI Framework Act 

South Korea’s AI Framework Act aims to strengthen national competitiveness by fostering the sound development of AI and building public trust, while ensuring the protection of human rights and dignity.

The Act mandates the establishment of an AI Basic Plan every three years, as well as the creation of a National AI Committee, an AI Policy Center, and an AI Safety Research Institute.

Moreover, the Act outlines the government’s responsibilities regarding AI technology and industry advancement. Key responsibilities include promoting the development and safe use of AI technology, supporting companies regarding the introduction and utilization of AI, fostering startups in the AI industry, attracting AI talent from overseas, facilitating international cooperation and overseas market entry, and securing financial resources to achieve these goals.

The Act was recently revised on 30 December 2025, just weeks before coming into force. Some key revisions include renaming the National AI Committee as the National AI Strategy Committee and strengthening its functions, establishing a legal basis for AI research institutes, ensuring AI accessibility for vulnerable groups, and promoting AI adoption in the public sector.

Comparing the AI Acts of South Korea and Japan

The AI Acts of South Korea and Japan both establish overarching AI regulatory frameworks and create high-level AI bodies. Additionally, both AI Acts share the same goals: promoting the AI industry and technology, leveraging AI for economic growth, and enhancing national competitiveness. This reflects Japan’s aspiration to become the most AI-friendly country in the world, as well as South Korea’s goal of becoming a global top-three AI power alongside the US and China.

Despite their mutual focus on innovation and development, the AI Acts of South Korea and Japan have significant differences related to risk-tiering and penalties.

First, while Japan’s AI Act does not classify AI systems by risk level, the South Korean Act specifically defines high-impact AI systems as those with the potential to significantly affect human life, physical safety, or fundamental rights. Furthermore, the South Korean Act imposes additional obligations on AI business operators providing high-impact AI. Businesses must obtain the necessary verification and certification before providing products and services that qualify as high-impact AI. They must also establish risk management plans, provide user protection measures, ensure human supervision, and conduct ex ante impact assessments on basic human rights.

Moreover, the Act requires businesses to disclose all AI-generated content through labeling such as watermarks.   

Second, unlike Japan’s AI Act, which does not impose any penalties for non-compliance, the South Korean Act introduces administrative fines for businesses that violate certain articles. Most notably, AI business operators that provide high-impact AI or generative AI are obligated to inform their users that the product or service is AI-based, and failing to comply will result in financial penalties. 

In summary, the Japanese Act follows a light-touch approach that prioritizes innovation, while the South Korean Act strikes a balance between economic growth and regulation by addressing high-risk areas. Importantly, both East Asian countries adopt a more relaxed model compared to the EU AI Act, which establishes more comprehensive obligations and prohibitions based on a multi-tiered risk classification.

Further Reading

  • Analyzing South Korea’s Framework Act on the Development of AI
  • South Korea Commits to Full Stack AI Growth
  • Why South Korea is vying to be first to regulate AI
  • AI law set to be implemented next month amid biz concerns
  • How newly revised AI Basic Act will reshape Korea’s AI landscape

Photo credit: The Diplomat

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.

© 2025 Montreal AI Ethics Institute. This work is licensed under a Creative Commons Attribution 4.0 International License.