
✍️ By Erik Charles Lincoln Vitek.
Erik is an Undergraduate Student in Political Science and Aviation Finance as well as an Undergraduate Affiliate at the Governance and Responsible AI Lab (GRAIL), Purdue University.
📌 Editor’s Note: This article is part of our AI Policy Corner series, a collaboration between the Montreal AI Ethics Institute (MAIEI) and the Governance and Responsible AI Lab (GRAIL) at Purdue University. The series provides concise insights into critical AI policy developments from the local to international levels, helping our readers stay informed about the evolving landscape of AI governance. This piece analyzes the two-stage approach to AI regulation and development outlined in Ukraine’s White Paper on Artificial Intelligence Regulation (Version for Consultation), explaining how these two stages connect to the nation’s broader geopolitical context.
Photo credit: Ground Picture/Shutterstock
https://ge.usembassy.gov/how-ukraine-and-u-s-tech-firms-build-for-the-future/
Amidst its nearly four-year war with the Russian Federation, Ukraine has published a white paper outlining how it aims to regulate its commercial artificial intelligence (AI) sector. Policies derived from the paper aim to establish an environment that supports Ukraine’s goals of business competitiveness, human rights protection, and European integration while shielding its defense AI sector from regulation. The paper proposes a bottom-up approach in two stages: a preparatory stage that allows for industry and state planning, followed by a second stage that introduces binding statutes intended to gradually replicate the EU’s Artificial Intelligence Act (AI Act).
Stage 1
In its first stage towards regulation, Ukraine proposes to implement training and soft-law tools that encourage participation from all stakeholders, and to develop a standard methodology for assessing the human rights impacts of AI products. These steps would provide the basis for a “regulatory sandbox” and an advisory platform for legal issues related to AI. Once established, the sandbox would give select AI projects a controlled environment in which to develop and test products under government guardianship, further aiding the state’s ability to evaluate and monitor AI products. Projects not selected for direct state engagement would receive legal assistance aimed at compliance with future legislation. Given limited state resources, this process relies on heavy involvement and buy-in from the private sector.
Understanding this, Ukraine also intends to solicit a partnership with leading AI firms and Ukrainian NGOs to initiate the Trusted Flagger concept (as suggested in the EU’s Digital Services Act), under which potential violations connected with the use of AI technologies would be mediated by trusted third parties and the platform itself. Private AI developers are also encouraged to voluntarily participate in self-labeling and code-of-conduct campaigns. This process aims to promote transparency for consumers through a system similar to the EU’s food-labeling scheme, highlighting potential biases, privacy measures, and training-data practices, while establishing a system of self-regulation that does not burden businesses with mandatory reporting. Finally, to track these tools, ensure access to them, and keep stakeholders informed, the state will develop a centralized hub in the form of a web portal.
Stage 2
When Ukraine graduates to the stage of legal implementation, it aims to enact regulations that mirror the EU’s Artificial Intelligence Act, in line with its overriding political goal of accession to the bloc and with the need for state AI regulation detailed in an initial meeting between Ukrainian and EU representatives. Work to amend Ukraine’s laws would begin following the European Union’s adoption of the AI Act, with gradual implementation emphasized to ensure general compliance, and thus accession, while allowing adequate preparation by Ukrainian private and state entities.
Future Outlook
In its white paper, Ukraine set out to reach the standards it agreed to at the inaugural AI Safety Summit, acknowledge the challenges in doing so, and outline a base from which policy can grow while still promoting technological and economic innovation. The outlook for its goal of integration with the European AI framework may be simplified by continued hesitancy surrounding the European Union’s Artificial Intelligence Act and growing EU sentiment towards relaxing regulation of AI development in favor of commercial growth. If Ukraine is to follow through on its stated goals, continued European partnership and domestic political evolution surrounding artificial intelligence remain key.
Further Reading:
- Legal Regulation of Artificial Intelligence in Ukraine: Challenges and Prospects
- Legal Aspects and State Regulation of the Use of Artificial Intelligence
- NOYB – European Center for Digital Rights: GDPR Reform Draft Analysis
- The EU promised to lead on regulating artificial intelligence. Now it’s hitting pause.
