Montreal AI Ethics Institute


Democratizing AI ethics literacy


AI Policy Corner: An Overview of Illinois Public Act 103-0804

March 16, 2026

✍️ By Isadora Argenta

Isadora is an undergraduate student in Political Science, minoring in Communication and Portuguese, and an Undergraduate Affiliate at the Governance and Responsible AI Lab (GRAIL) at Purdue University.


📌 Editor’s Note: This article is part of our AI Policy Corner series, a collaboration between the Montreal AI Ethics Institute (MAIEI) and the Governance and Responsible AI Lab (GRAIL) at Purdue University. The series provides concise insights into critical AI policy developments from the local to international levels, helping our readers stay informed about the evolving landscape of AI governance. This article analyzes the Illinois Public Act 103-0804 and its regulation of how AI is deployed in the workplace.


Artificial Intelligence (AI) is increasingly embedded not only in our day-to-day activities but also in our workplaces. Companies have begun applying AI to screen job applications and to inform promotion decisions, meaning these technologies now affect individual career opportunities and professional outcomes. Researchers caution that the gains from these systems are likely to be distributed unevenly across workers and industries, so some groups may benefit more than others. In response to these risks, governments are beginning to regulate how AI is used in the workplace. The Illinois Public Act 103-0804 is a prime example of these regulatory efforts.

Definitions 

The Illinois Public Act 103-0804 defines Artificial Intelligence as a machine-based system that takes inputs and produces outputs, such as recommendations, decisions, or content, which can in turn influence physical or virtual environments. The Act also defines generative Artificial Intelligence as an automated computing system that produces content when prompted by a human user.

Discrimination within Artificial Intelligence 

A principal component of the Illinois Public Act 103-0804 is the introduction of rules about how AI can be used in employment decisions. The Act provides that if an AI system results in discrimination against individuals protected under the Illinois Human Rights Act, the employer that used the system has committed a civil rights violation. Ultimately, this means employers cannot rely uncritically on AI systems, as doing so could lead to unfair treatment of employees or job applicants for which the employer is liable.

This rule is significant because AI systems can unintentionally amplify existing inequalities in the workforce. Since AI systems usually rely on large datasets to generate recommendations, patterns within the data shape the results. If historical employment data reflects inequality or misrepresentation, automated systems may reproduce those same patterns when assisting with workforce decisions.

The Illinois Public Act 103-0804 furthermore describes ways that discrimination can occur indirectly through the use of certain types of data. For instance, the Act states that employers cannot use zip codes as a proxy for protected classes when AI is used in employment decisions. This recognizes that some information may indirectly correlate with protected characteristics, which could lead to discriminatory outcomes. The Act also has a strong focus on transparency: employers must provide notice to employees when AI is being used in employment decisions covered by the law, and failure to do so is itself a violation. By requiring transparency, the law aims to ensure that workers remain aware of how these systems may influence decisions about their employment.

Importance 

Although the Illinois Human Rights Act already prohibits discrimination in hiring and workplace decisions, Public Act 103-0804 clarifies that these protections apply even when decisions are made using automated systems. Employers cannot avoid responsibility for discriminatory outcomes simply because a decision was influenced by an algorithm; they remain responsible for ensuring that the technologies they use comply with civil rights protections.

It is also important to acknowledge that the benefits of generative AI are not evenly distributed: some workers may gain more from productivity improvements while others bear greater disadvantages. By requiring employers to account for how AI affects employees, the Act aims to prevent these unequal effects.

Further Reading: 

  • How different states are approaching AI
  • How artificial intelligence impacts the US labor market
  • What is AI bias?

Image credit: 2Civility



  • © 2025 MONTREAL AI ETHICS INSTITUTE.
  • This work is licensed under a Creative Commons Attribution 4.0 International License.
  • Learn more about our open access policy here.
  • Creative Commons License
