Montreal AI Ethics Institute

Democratizing AI ethics literacy


AI Policy Corner: How Brazil Plans to Govern AI: Reviewing PL 2338/2023

December 9, 2025

✍️ By Isadora Argenta

Isadora is an undergraduate student in Political Science with a minor in Communication, and an Undergraduate Affiliate at the Governance and Responsible AI Lab (GRAIL) at Purdue University.


📌 Editor’s Note: This article is part of our AI Policy Corner series, a collaboration between the Montreal AI Ethics Institute (MAIEI) and the Governance and Responsible AI Lab (GRAIL) at Purdue University. The series provides concise insights into critical AI policy developments from the local to international levels, helping our readers stay informed about the evolving landscape of AI governance. This piece provides an overview of Brazil’s Bill on AI regulation, PL 2338/2023.

Photo credit: Digital Watch Observatory
https://dig.watch/updates/brazil-halts-metas-new-privacy-policy-for-ai-training-citing-serious-privacy-risks 


Brazil has increasingly recognized the importance of regulating artificial intelligence (AI) as the technology has become more widespread and more capable. The country has introduced several legislative proposals over the years to establish frameworks guiding how AI is developed and used. The most recent proposal, PL 2338/2023, aims to set clear rules for AI use, including citizen protections and risk assessment.

  1. Framework 

PL 2338/2023, which is moving through the legislative process, sets up the structure for regulating AI in Brazil, covering aspects such as citizen protection and risk classification. The bill encourages the development and use of AI in ways that are ethical and safe and that do not put people at risk. Under this approach, AI systems must serve people and democracy rather than replace or harm humans, and individuals would retain control over how AI affects them.

The bill does not treat all AI applications the same. Before an AI system is sold or put into service, its provider must assess its risk. Systems are then classified into three categories according to how potentially dangerous they are: “excessive risk” (prohibited), “high risk” (regulated), and “non-high/non-excessive risk.” Excessive-risk systems are considered too dangerous to be allowed at all and are prohibited because they pose threats such as violations of fundamental rights. High-risk systems are those that could directly affect individuals’ lives or rights in critical areas such as healthcare and justice, where errors can cause serious harm. Their use is subject to strict rules: citizens must know when an AI system is making a decision that affects them, a human must review important outcomes, providers must explain how the system works and check it for bias, and some systems may be reviewed by outside experts. The non-high/non-excessive risk category covers AI systems that present lower risks.

PL 2338/2023 also supports innovation by letting developers test AI systems in controlled environments called regulatory sandboxes. Although many details will still depend on future regulations, the overall goal of the bill is to build an AI ecosystem in Brazil that is safe and ethical.
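To make the tiered structure easier to follow, here is a minimal, purely illustrative Python sketch of the three risk categories and the obligations described above. Nothing in this snippet comes from the bill’s text; the class names, fields, and the example system are hypothetical and exist only to summarize the article’s description.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    """The three risk categories described in PL 2338/2023 (illustrative labels)."""
    EXCESSIVE = "excessive risk"        # prohibited outright
    HIGH = "high risk"                  # permitted, but heavily regulated
    OTHER = "non-high/non-excessive"    # lower-risk systems


@dataclass
class AISystemAssessment:
    """Toy record of a provider's pre-market risk assessment (hypothetical model)."""
    name: str
    tier: RiskTier

    def obligations(self) -> list[str]:
        """Return the obligations the article associates with each tier."""
        if self.tier is RiskTier.EXCESSIVE:
            return ["prohibited: may not be sold or put into service"]
        if self.tier is RiskTier.HIGH:
            return [
                "inform affected individuals that AI is making the decision",
                "ensure human review of important outcomes",
                "explain how the system works and check it for bias",
                "allow review by outside experts where applicable",
            ]
        return ["lower-risk: lighter requirements (details left to future regulation)"]


# Example: a hypothetical hospital triage assistant would fall in the high-risk tier.
triage_tool = AISystemAssessment(name="hospital triage assistant", tier=RiskTier.HIGH)
for duty in triage_tool.obligations():
    print(duty)
```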

  2. Evolution Over the Years

Although PL 2338/2023 is still working its way toward becoming law, Brazil has taken multiple steps over the past several years to regulate AI. In 2020, PL 21/2020 was introduced to set up general guidelines for AI development and use. The bill focused on broad ethical principles such as fairness and transparency. However, it was drafted before today’s more advanced AI technologies emerged, making it difficult to anticipate the problems and ethical issues that come with modern AI. As a result, the bill outlined what should be done but not how to actually make it happen, and it was never passed. Even so, these earlier efforts provided useful experience and ideas for improvement, helping legislators create the more detailed framework that is now part of PL 2338/2023.

  3. Future Outlook

If PL 2338/2023 becomes law, it will provide a clearer regulatory structure for AI in Brazil. Citizens will gain greater oversight of how AI directly affects them, including the right to know when AI is making decisions about them, to request human review, and to challenge outcomes that could harm them. Developers and companies, in turn, will take on new responsibilities, such as documenting how their AI systems work and following safety rules. This could position Brazil as a leader in AI governance in Latin America, showing that it is possible to grow AI while protecting citizens’ rights and safety.

Further Reading:

  • Realizing Brazil’s AI Ambition Through Future-Proof Regulation
  • Brazil’s AI Act: A New Era of AI Regulation
  • Dialogues Between Brazil and the U.S.: Should AI Be Regulated?

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.

