

✍️ By Selen Dogan Kosterit.
Selen is a PhD Student in Political Science and a Graduate Lab Affiliate at the Governance and Responsible AI Lab (GRAIL), Purdue University.
📌 Editor’s Note: This article is part of our AI Policy Corner series, a collaboration between the Montreal AI Ethics Institute (MAIEI) and the Governance and Responsible AI Lab (GRAIL) at Purdue University. The series provides concise insights into critical AI policy developments from the local to international levels, helping our readers stay informed about the evolving landscape of AI governance. This inaugural piece spotlights Turkey’s AI law proposal, examining its strengths and the gaps in aligning with global AI governance frameworks.
Turkey currently lacks a specific law that directly regulates artificial intelligence (AI). However, a law proposal on AI was submitted to the Grand National Assembly of Turkey in June 2024. The law proposal aims to ensure the safe, ethical, and fair use of AI technologies, guarantee the protection of personal data and privacy rights, and create a regulatory framework for the development and use of AI systems.
Risk Factors, Harms, Governance Strategies, and Incentives for Compliance
- Risk factors and harms: The law proposal explicitly states that the fundamental principles of safety, transparency, fairness, accountability, and privacy must be followed in the development and use of AI systems. In line with these principles, the proposal addresses the AI-related risk factors of safety, transparency, bias, and privacy. Furthermore, by emphasizing the protection of personal data and mandating that AI systems must not cause harm to users or result in discrimination, the proposal also seeks to prevent AI-related harms, including violations of civil or human rights, harms to safety, and harms stemming from discrimination.
- Governance strategies: The law proposal requires risk assessments to be carried out during the development and use of AI systems, with special measures implemented for high-risk systems. It also mandates that high-risk systems be registered with the relevant supervisory authorities and undergo a conformity assessment, and it states that supervisory authorities will be responsible for monitoring compliance and detecting violations. Taken together, these provisions incorporate several governance strategies: evaluating AI systems through impact and conformity assessments, risk-tiering AI systems based on their impact, registering high-risk AI systems, and building governance capacity through enforcement mechanisms.
- Incentives for compliance: The law proposal declares that AI operators will be penalized with fines for engaging in prohibited AI applications, violating obligations, or providing false information.
Criticism and Areas for Improvement
Although the law proposal is a welcome first step toward establishing AI governance in Turkey, some critics argue that it falls short of international standards in several key respects:
- The law proposal does not specify which institution will be responsible for monitoring compliance and detecting violations.
- While the EU AI Act classifies AI systems into four risk categories and sets out specific regulations depending on each category, the Turkish law proposal merely indicates that special measures should be adopted for high-risk systems. It neither defines which AI systems fall into the high-risk category nor provides details on how they should be regulated.
Recent Developments in AI Governance
Despite the Turkish AI law proposal's lack of depth and clarity, some recent developments point toward the creation of a stronger AI regulatory framework in Turkey.
With the establishment of a Parliamentary AI Research Commission focused on ethical standards, plans to sign the Council of Europe’s Framework Convention on AI and Human Rights, Democracy, and the Rule of Law, and intentions to align Turkish regulations with the EU AI Act, Turkey seems to be on the right path toward building responsible and ethical AI governance.