Montreal AI Ethics Institute

Democratizing AI ethics literacy


AI Policy Corner: How Brazil Plans to Govern AI: Reviewing PL 2338/2023

December 9, 2025

✍️ By Isadora Argenta

Isadora is an undergraduate student in Political Science, minoring in Communication, and an Undergraduate Affiliate at the Governance and Responsible AI Lab (GRAIL) at Purdue University.


📌 Editor’s Note: This article is part of our AI Policy Corner series, a collaboration between the Montreal AI Ethics Institute (MAIEI) and the Governance and Responsible AI Lab (GRAIL) at Purdue University. The series provides concise insights into critical AI policy developments from the local to international levels, helping our readers stay informed about the evolving landscape of AI governance. This piece provides an overview of Brazil’s Bill on AI regulation, PL 2338/2023.

Photo credit: Digital Watch Observatory
https://dig.watch/updates/brazil-halts-metas-new-privacy-policy-for-ai-training-citing-serious-privacy-risks 


Brazil has increasingly recognized the importance of regulating artificial intelligence (AI) as the technology has grown more capable and widespread. The country has introduced several legislative proposals over the years to guide how AI is developed and used. The most recent, PL 2338/2023, aims to establish clear rules for AI use, including citizen protections and risk assessment.

  1. Framework 

Brazil has begun to create domestic rules for artificial intelligence. PL 2338/2023, which is moving through the legislative process, sets up the structure for regulating AI in Brazil, covering citizen protection and risk classification. The bill encourages the development and use of AI that is ethical and safe and does not put people at risk. Under this approach, AI systems must serve people and democracy rather than replace or harm humans, meaning individuals would retain control over how AI affects them.

The bill does not treat all AI applications the same. Before any AI system is sold or put into service, its provider must assess its risk. Based on how potentially dangerous a system is, it falls into one of three categories: "excessive risk" (prohibited), "high risk" (regulated), and "non-high/non-excessive risk." Excessive-risk systems are considered too dangerous to be allowed at all; they are prohibited because they pose threats such as violations of fundamental rights. A high-risk system is one that could directly affect individuals' lives or rights in critical areas such as healthcare and justice, where errors can cause serious harm. High-risk systems face strict rules: citizens must know when AI is making a decision that affects them, a human must review important outcomes, providers must explain how the AI works and check it for bias, and some systems may be reviewed by outside experts. The non-high/non-excessive risk category covers systems that present lower risks. PL 2338/2023 also supports innovation by letting developers test AI systems in controlled environments called regulatory sandboxes.
Although many details will still depend on future regulations, the overall goal of PL 2338/2023 is to build an AI ecosystem in Brazil that is safe and ethical. 
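The tiered scheme described above can be sketched as a toy model. Everything here is an illustrative simplification, not the bill's legal text: the class names, trigger conditions, and obligation lists are assumptions made for the example.

```python
from enum import Enum
from dataclasses import dataclass

class RiskTier(Enum):
    """The bill's three risk categories (labels are illustrative)."""
    EXCESSIVE = "excessive risk"            # prohibited outright
    HIGH = "high risk"                      # regulated, extra obligations
    OTHER = "non-high/non-excessive risk"   # lower-risk systems

@dataclass
class AISystem:
    name: str
    threatens_fundamental_rights: bool  # e.g. harms that trigger prohibition
    critical_domain: bool               # e.g. healthcare, justice

def classify(system: AISystem) -> RiskTier:
    """Toy pre-deployment assessment: providers must classify a system
    before it is placed on the market or put into service."""
    if system.threatens_fundamental_rights:
        return RiskTier.EXCESSIVE
    if system.critical_domain:
        return RiskTier.HIGH
    return RiskTier.OTHER

def obligations(tier: RiskTier) -> list[str]:
    """Obligations paraphrased from the requirements described above."""
    if tier is RiskTier.EXCESSIVE:
        return ["prohibited"]
    if tier is RiskTier.HIGH:
        return [
            "notify individuals when AI makes a decision affecting them",
            "human review of important outcomes",
            "explainability and bias checks",
            "possible review by outside experts",
        ]
    return ["baseline transparency"]
```

For instance, a hypothetical hospital-triage system would land in the high-risk tier and carry the human-review obligation, while a spam filter would fall into the lower-risk category.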

  2. Evolution Over the Years

Although PL 2338/2023 has yet to become law, Brazil has taken multiple steps over the past several years to regulate AI. In 2020, PL 21/2020 was introduced to set up general guidelines for AI development and use. The bill focused on broad ethical principles such as fairness and transparency. However, it was drafted before today's advanced AI systems emerged, making it difficult to anticipate the problems and ethical issues that come with modern AI. As a result, the bill outlined what should be done but not how to make it happen, and it was never passed. Even so, these earlier efforts provided useful experience and ideas for improvement, helping lawmakers create the more detailed framework now found in PL 2338/2023.

  3. Future Outlook

If PL 2338/2023 becomes law, it will provide a clearer regulatory structure for AI in Brazil. Citizens will gain oversight of how AI directly affects them, including the right to know when AI is making decisions, and the ability to request human review and challenge outcomes that could harm them. Developers and companies, in turn, will take on new responsibilities, such as documenting how their AI works and following safety rules. This could position Brazil as a leader in AI governance in Latin America, showing that AI can grow while citizens' rights and safety are protected.

Further Reading:

  • Realizing Brazil’s AI Ambition Through Future-Proof Regulation
  • Brazil’s AI Act: A New Era of AI Regulation
  • Dialogues Between Brazil and the U.S.: Should AI Be Regulated?

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.



  • © 2025 MONTREAL AI ETHICS INSTITUTE.
  • This work is licensed under a Creative Commons Attribution 4.0 International License.