Montreal AI Ethics Institute

Democratizing AI ethics literacy

AI Policy Corner: The Turkish Artificial Intelligence Law Proposal

March 17, 2025

✍️ By Selen Dogan Kosterit.

Selen is a PhD Student in Political Science and a Graduate Lab Affiliate at the Governance and Responsible AI Lab (GRAIL), Purdue University.


📌 Editor’s Note: This article is part of our AI Policy Corner series, a collaboration between the Montreal AI Ethics Institute (MAIEI) and the Governance and Responsible AI Lab (GRAIL) at Purdue University. The series provides concise insights into critical AI policy developments from the local to international levels, helping our readers stay informed about the evolving landscape of AI governance. This inaugural piece spotlights Turkey’s AI law proposal, examining its strengths and the gaps in aligning with global AI governance frameworks.


Turkey currently lacks a specific law that directly regulates artificial intelligence (AI). However, a law proposal on AI was submitted to the Grand National Assembly of Turkey in June 2024. The law proposal aims to ensure the safe, ethical, and fair use of AI technologies, guarantee the protection of personal data and privacy rights, and create a regulatory framework for the development and use of AI systems. 

Risk factors, harms, governance strategies, and incentives for compliance

  • Risk factors and harms: The law proposal explicitly states that the fundamental principles of safety, transparency, fairness, accountability, and privacy must be followed in the development and use of AI systems. Through these principles, the proposal addresses the AI-related risk factors of safety, transparency, bias, and privacy. Furthermore, by emphasizing the protection of personal data and mandating that AI systems not harm users or produce discriminatory outcomes, the proposal also seeks to prevent AI-related harms, including violations of civil or human rights, harms to safety, and harms stemming from discrimination.
  • Governance strategies: The law proposal requires risk assessments to be carried out during the development and use of AI systems, with special measures implemented for high-risk systems. Additionally, it mandates that high-risk systems be registered with relevant supervisory authorities and undergo a conformity assessment. Moreover, the proposal states that supervisory authorities will be responsible for monitoring compliance and detecting violations. Based on these provisions, the proposal incorporates several governance strategies, such as the evaluation of AI systems through impact assessment and conformity assessment, risk-tiering of AI systems based on impact, registration of high-risk AI systems, and governance development by establishing enforcement mechanisms.
  • Incentives for compliance: The law proposal provides that AI operators will face fines for engaging in prohibited AI applications, violating their obligations, or providing false information.

Criticism and Areas for Improvement

Although the law proposal is a welcome first step toward establishing AI governance in Turkey, some critics argue that it falls short of international standards in key aspects:

  • The law proposal does not specify which institution will be responsible for monitoring compliance and detecting violations.
  • While the EU AI Act classifies AI systems into four risk categories and sets out specific regulations depending on each category, the Turkish law proposal merely indicates that special measures should be adopted for high-risk systems. It neither defines which AI systems fall into the high-risk category nor provides details on how they should be regulated. 

Recent Developments in AI Governance

Despite the Turkish AI law proposal’s limited depth and clarity, several recent developments are promising steps toward a strong AI regulatory framework in Turkey.

With the establishment of a Parliamentary AI Research Commission focused on ethical standards, plans to sign the Council of Europe’s Framework Convention on AI and Human Rights, Democracy, and the Rule of Law, and intentions to align Turkish regulations with the EU AI Act, Turkey seems to be on the right path toward building responsible and ethical AI governance. 

Further Reading

  • Navigating the Future of AI Regulation in Türkiye: Key Developments and Expectations
  • Türkiye’s AI research commission eyes unveiling ‘vision document’
  • Türkiye prepares legal framework for artificial intelligence regulation

