
✍️ By Isadora Argenta.
Isadora is an undergraduate student majoring in Political Science and minoring in Communication, and an Undergraduate Affiliate at the Governance and Responsible AI Lab (GRAIL), Purdue University.
📌 Editor’s Note: This article is part of our AI Policy Corner series, a collaboration between the Montreal AI Ethics Institute (MAIEI) and the Governance and Responsible AI Lab (GRAIL) at Purdue University. The series provides concise insights into critical AI policy developments from the local to international levels, helping our readers stay informed about the evolving landscape of AI governance. This piece provides an overview of Brazil’s Bill on AI regulation, PL 2338/2023.
Photo credit: Digital Watch Observatory
https://dig.watch/updates/brazil-halts-metas-new-privacy-policy-for-ai-training-citing-serious-privacy-risks
Brazil has increasingly recognized the importance of regulating artificial intelligence (AI) as the technology has become more widespread and advanced. The country has introduced several legislative proposals over the years to establish frameworks guiding how AI is used and developed. The most recent proposal, PL 2338/2023, aims to set clear rules for AI use, including citizen protection and risk assessment.
- Framework
Brazil has begun to create domestic rules for artificial intelligence, pursuing a framework that covers both how AI is used and how it is developed. PL 2338/2023, which is moving through the legislative process, sets up the structure for regulating AI in Brazil, covering matters such as citizen protection and risk classification. The bill encourages the development and use of AI that is ethical and safe and does not put people at risk. Under this approach, AI systems must serve people and democracy rather than replace or harm humans, which means individuals would retain control over how AI affects them.

The bill does not treat all AI applications the same. Before any AI system is sold or put into service, the provider must assess its risk. To manage these differences, AI systems are classified into three categories based on how potentially dangerous they are:
- Excessive risk (prohibited): systems considered too risky to be allowed at all, because they pose threats such as violating fundamental rights.
- High risk (regulated): systems that could directly affect individuals’ lives or rights in critical areas such as healthcare and justice, where errors can cause serious harm. Their use is subject to strict rules: citizens must know when AI is making a decision that affects them, a human must review important outcomes, providers must explain how the AI works and check it for bias, and some systems may be reviewed by outside experts.
- Non-high/non-excessive risk: systems that present lower risks and face lighter obligations.

PL 2338/2023 also supports innovation by letting developers test AI systems in controlled environments called regulatory sandboxes.
Although many details will still depend on future regulations, the overall goal of PL 2338/2023 is to build an AI ecosystem in Brazil that is safe and ethical.
- Evolution Over the Years
Although PL 2338/2023 is still making its way through the legislative process, Brazil has taken multiple steps over the past several years to regulate AI. In 2020, PL 21/2020 was introduced to set up general guidelines for AI development and use. The bill focused on broad ethical principles such as fairness and transparency. However, it was drafted before the rise of today’s advanced AI systems, making it difficult to anticipate the problems and ethical issues that accompany modern AI. As a result, the bill outlined what should be done but not how to make it happen, and it was never passed into law. Even so, these earlier efforts provided useful experience and ideas for improvement, helping lawmakers create the more detailed framework now found in PL 2338/2023.
- Future Outlook
If PL 2338/2023 becomes law, it will provide a clearer regulatory structure for AI in Brazil. Citizens will gain oversight of how AI directly affects them, including the right to know when AI is making decisions, as well as the ability to request human review and to challenge outcomes that could harm them. Developers and companies, in turn, will take on new responsibilities, such as documenting how their AI works and following safety rules. This could position Brazil as a leader in AI governance in Latin America, showing that it is possible to grow AI while protecting citizens’ rights and safety.
Further Reading:
