Montreal AI Ethics Institute

Democratizing AI ethics literacy


AI Policy Corner: Reviewing Ukraine’s Whitepaper on Artificial Intelligence Regulation

November 24, 2025

✍️ By Erik Charles Lincoln Vitek

Erik is an undergraduate student in Political Science and Aviation Finance and an Undergraduate Affiliate at the Governance and Responsible AI Lab (GRAIL) at Purdue University.


📌 Editor’s Note: This article is part of our AI Policy Corner series, a collaboration between the Montreal AI Ethics Institute (MAIEI) and the Governance and Responsible AI Lab (GRAIL) at Purdue University. The series provides concise insights into critical AI policy developments from the local to international levels, helping our readers stay informed about the evolving landscape of AI governance. This piece analyzes the two-stage approach to AI regulation and development outlined in Ukraine’s White Paper on Artificial Intelligence Regulation (Version for Consultation), explaining how the two stages connect to the nation’s broader geopolitical context.

Photo credit: Ground Picture/Shutterstock
https://ge.usembassy.gov/how-ukraine-and-u-s-tech-firms-build-for-the-future/


Amidst its nearly four-year war with the Russian Federation, Ukraine has published a whitepaper outlining how it aims to regulate its commercial artificial intelligence (AI) sector. Policies derived from the paper aim to establish an environment that supports Ukraine’s goals of business competitiveness, human rights protection, and European integration while shielding its defense AI sector from regulation. The paper proposes a bottom-up approach encompassing two stages: a preparatory stage that allows for industry and state planning, followed by a second stage that introduces binding statutes intended to gradually replicate the EU’s Artificial Intelligence Act (AI Act).

Stage 1

In its first stage towards regulation, Ukraine proposes to introduce training tools and soft-law instruments that encourage participation from all stakeholders, and to develop a standard methodology for assessing the human rights impacts of AI products. These steps are meant to provide the basis for developing a “regulatory sandbox” or an advisory platform for legal issues related to AI. Through the sandbox, the state aims to give select AI projects a controlled environment in which to develop and test products under government supervision, which will in turn strengthen the state’s ability to evaluate and monitor AI products. Projects not selected for direct state engagement will be supported through legal assistance aimed at compliance with future legislation. Given limited state resources, this process relies on heavy involvement and buy-in from the private sector.

Understanding this, Ukraine also intends to solicit a partnership with leading AI firms and Ukrainian NGOs to initiate the Trusted Flagger concept (as suggested in the EU’s Digital Services Act), under which potential violations connected with the use of AI technologies would be mediated by trusted third parties and the platform itself. Private AI developers are also encouraged to voluntarily participate in self-labeling and code-of-conduct campaigns. This is intended to promote transparency for consumers through a system similar to the EU’s food labeling program, highlighting potential biases, privacy measures, and training data practices, while establishing a system of self-regulation that avoids burdening businesses with mandatory reporting. Finally, to track these tools, ensure access to them, and keep stakeholders informed, the state will develop a centralized hub in the form of a web portal.
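As a purely illustrative aside, the sketch below shows what a voluntary self-label of the kind described above might look like if expressed as a simple data record. The whitepaper does not define any such schema; the `AIProductLabel` structure and its field names are our own assumptions, mirroring only the disclosure categories the paper mentions (potential biases, privacy measures, and training data practices).

```python
from dataclasses import dataclass, field

# Hypothetical illustration only: the whitepaper does not specify a label format.
# Field names are assumptions based on the disclosures described in the article.
@dataclass
class AIProductLabel:
    product_name: str
    developer: str
    intended_use: str                               # what the system is marketed for
    known_bias_risks: list[str] = field(default_factory=list)
    privacy_measures: list[str] = field(default_factory=list)
    training_data_summary: str = ""                 # provenance and collection practices
    code_of_conduct_signatory: bool = False         # voluntary commitment flag

# Example of how a developer might publish a voluntary label:
label = AIProductLabel(
    product_name="ExampleVision",
    developer="Example LLC",
    intended_use="Retail analytics",
    known_bias_risks=["lower accuracy on low-light images"],
    privacy_measures=["on-device processing", "no facial identification"],
    training_data_summary="Licensed image datasets; no scraped personal data",
    code_of_conduct_signatory=True,
)
print(label)
```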

Stage 2

When Ukraine graduates to the stage of legal implementation, it aims to enact regulations that mirror the EU’s AI Act, in line with its overriding political goal of accession to the bloc and with the need for state AI regulation raised in an initial meeting between Ukrainian and EU representatives. Work to amend Ukraine’s laws would begin after the European Union’s adoption of the AI Act, with gradual implementation emphasized both to ensure general compliance, and thus accession, and to allow adequate preparation by Ukrainian private and state entities.

Future Outlook

In its whitepaper, Ukraine set out to meet the standards it agreed to at the inaugural AI Safety Summit, acknowledge the challenges in doing so, and outline a base from which policy can grow while still promoting technological and economic innovation. Its goal of integrating with the European AI framework may be eased by continued hesitancy surrounding the EU’s AI Act and growing EU sentiment toward relaxing regulation of AI development in favor of commercial growth. If Ukraine is to follow through on its stated goals, continued European partnership and domestic political evolution around artificial intelligence remain key.

Further Reading:

  • Legal Regulation of Artificial Intelligence in Ukraine: Challenges and Prospects
  • Legal Aspects and State Regulation of the Use of Artificial Intelligence
  • NOYB – European Center for Digital Rights: GDPR Reform Draft Analysis
  • The EU promised to lead on regulating artificial intelligence. Now it’s hitting pause.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.

