Montreal AI Ethics Institute


AI Policy Corner: The Turkish Artificial Intelligence Law Proposal

March 17, 2025

✍️ By Selen Dogan Kosterit.

Selen is a PhD Student in Political Science and a Graduate Lab Affiliate at the Governance and Responsible AI Lab (GRAIL), Purdue University.


📌 Editor’s Note: This article is part of our AI Policy Corner series, a collaboration between the Montreal AI Ethics Institute (MAIEI) and the Governance and Responsible AI Lab (GRAIL) at Purdue University. The series provides concise insights into critical AI policy developments from the local to international levels, helping our readers stay informed about the evolving landscape of AI governance. This inaugural piece spotlights Turkey’s AI law proposal, examining its strengths and the gaps in aligning with global AI governance frameworks.


The Turkish Artificial Intelligence Law Proposal

Turkey currently lacks a specific law that directly regulates artificial intelligence (AI). However, a law proposal on AI was submitted to the Grand National Assembly of Turkey in June 2024. The law proposal aims to ensure the safe, ethical, and fair use of AI technologies, guarantee the protection of personal data and privacy rights, and create a regulatory framework for the development and use of AI systems. 

Risk factors, harms, governance strategies, and incentives for compliance

  • Risk factors and harms: The law proposal explicitly states that the fundamental principles of safety, transparency, fairness, accountability, and privacy must be followed in the development and use of AI systems. Given these principles, this proposal governs the AI-related risk factors of safety, transparency, bias, and privacy. Furthermore, by emphasizing the protection of personal data and mandating that AI systems shall not cause harm to users or result in discrimination, this proposal also seeks to prevent AI-related harms, including violations of civil or human rights, harms to safety, and harms stemming from discrimination.
  • Governance strategies: The law proposal requires risk assessments to be carried out during the development and use of AI systems, with special measures implemented for high-risk systems. Additionally, it mandates that high-risk systems be registered with relevant supervisory authorities and undergo a conformity assessment. Moreover, the proposal states that supervisory authorities will be responsible for monitoring compliance and detecting violations. Based on these provisions, the proposal incorporates several governance strategies, such as the evaluation of AI systems through impact assessment and conformity assessment, risk-tiering of AI systems based on impact, registration of high-risk AI systems, and governance development by establishing enforcement mechanisms.
  • Incentives for compliance: The law proposal declares that AI operators will be penalized with fines for engaging in prohibited AI applications, violating obligations, or providing false information. 

Criticism and Areas for Improvement

Although the law proposal is a welcome first step toward establishing AI governance in Turkey, some critics argue that it falls short of international standards in key aspects:

  • The law proposal does not specify which institution will be responsible for monitoring compliance and detecting violations.
  • While the EU AI Act classifies AI systems into four risk categories and sets out specific regulations depending on each category, the Turkish law proposal merely indicates that special measures should be adopted for high-risk systems. It neither defines which AI systems fall into the high-risk category nor provides details on how they should be regulated. 

Recent Developments in AI Governance

Despite the law proposal's lack of depth and clarity, several recent developments suggest Turkey is moving toward a stronger AI regulatory framework.

With the establishment of a Parliamentary AI Research Commission focused on ethical standards, plans to sign the Council of Europe’s Framework Convention on AI and Human Rights, Democracy, and the Rule of Law, and intentions to align Turkish regulations with the EU AI Act, Turkey seems to be on the right path toward building responsible and ethical AI governance. 

Further Reading

  • Navigating the Future of AI Regulation in Türkiye: Key Developments and Expectations
  • Türkiye’s AI research commission eyes unveiling ‘vision document’
  • Türkiye prepares legal framework for artificial intelligence regulation

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.

