A Snapshot of the Frontiers of Fairness in Machine Learning (Research Summary)

October 5, 2020

Summary contributed by our Artist-in-Residence Falaah Arif Khan. She’s also a Research Fellow in the CVIT Lab at the International Institute of Information Technology.

Link to original paper + authors at the bottom.


Mini-summary: In this succinct review of the scholarship on Fair Machine Learning (ML), Chouldechova and Roth outline the major strides taken toward understanding algorithmic bias, discuss the merits and shortcomings of proposed approaches, and present salient open questions at the frontiers of Fair ML. These include statistical versus individual notions of fairness, the dynamics of fairness in socio-technical systems, and the detection and correction of algorithmic bias.

Full summary:

The motivation behind the paper is to highlight the key research directions in Fair ML that provide a scientific foundation for understanding algorithmic bias. These broadly include: identifying bias encoded in data without access to true outcomes (for example, we have access to data about who was arrested, not about who committed a crime); the utilitarian approach to optimization, which caters purely to the majority without taking minority groups into account; and the ethics of exploration. The role of exploration is a key one, since in order to validate our predictions we must have data that records how the outcome in fact played out. This brings up several important questions: Is the impact of exploration overwhelmingly felt by one subgroup? If we deem the risks of exploration too high, by how much does a lack of exploration slow learning? Is it ethical to sacrifice the well-being of current populations for the perceived well-being of future populations?
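To make the first of these directions concrete, here is a minimal simulation (all rates are hypothetical choices, not from the paper) of how arrest data can diverge from offense data: two groups with identical underlying offense rates yield different arrest rates once policing intensity differs, so a model trained on arrest labels inherits the gap.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two groups with the SAME underlying offense rate, but group B is policed
# more heavily, so arrests (the labels a model would see) differ.
n = 100_000
group = rng.integers(0, 2, n)                       # 0 = group A, 1 = group B
offense = rng.random(n) < 0.10                      # true outcome, identical across groups
patrol_rate = np.where(group == 0, 0.2, 0.6)        # hypothetical policing intensity
arrested = offense & (rng.random(n) < patrol_rate)  # what the dataset actually records

for g, name in [(0, "A"), (1, "B")]:
    m = group == g
    print(f"group {name}: offense rate {offense[m].mean():.3f}, "
          f"arrest rate {arrested[m].mean():.3f}")
```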

The next important research direction is one that seeks to formalize the definition of fairness. There are several proposed definitions, the most popular being statistical definitions of fairness. Such a formulation enforces parity in some chosen statistical measure across all groups in the data. The simplicity, the assumption-free nature, and the ease with which a statistically fair allocation can be verified make this definition popular. However, a major shortcoming is the proven impossibility of simultaneously equalizing multiple desirable statistical measures. A statistical definition of fairness can also be computationally expensive to model.
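As a sketch of why statistical definitions are so easy to verify, the hypothetical helpers below compute two common parity measures, demographic parity and equalized odds, from nothing more than predictions and group labels (the function names and the choice of measures are illustrative, not the paper's):

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Largest across-group difference in positive-prediction rate."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equalized_odds_gap(y_true, y_pred, group):
    """Largest across-group difference in true- and false-positive rates."""
    gaps = []
    for label in (0, 1):                  # condition on the true outcome
        mask = y_true == label
        rates = [y_pred[mask & (group == g)].mean() for g in np.unique(group)]
        gaps.append(max(rates) - min(rates))
    return max(gaps)
```

When base rates differ across groups, these two gaps generally cannot both be driven to zero at once, which is one instance of the impossibility result mentioned above.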

The second popular notion is that of individual fairness, which requires that, for a given task, the algorithm treat similar individuals similarly. While this is semantically richer, it makes strong assumptions, most notably access to a trusted, task-specific similarity metric, that are difficult to realize in practice.
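One standard formalization, commonly attributed to Dwork et al., is a Lipschitz condition: the difference in scores between two individuals should be bounded by their distance under a task-specific metric. A hypothetical auditing sketch under that assumption:

```python
def individual_fairness_violations(X, scores, metric, lipschitz=1.0):
    """Count pairs whose score difference exceeds what the task-specific
    similarity metric allows: |f(x_i) - f(x_j)| > L * d(x_i, x_j)."""
    violations = 0
    n = len(X)
    for i in range(n):
        for j in range(i + 1, n):
            if abs(scores[i] - scores[j]) > lipschitz * metric(X[i], X[j]):
                violations += 1
    return violations

# The hard part is `metric`: the definition presumes a trusted, task-specific
# notion of similarity, which is exactly the assumption that rarely holds.
```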

Chouldechova and Roth then go on to present questions around intersectional fairness, namely how different algorithmic biases compound for individuals who fall at the intersection of multiple protected groups. They also question the feasibility of a 'good' metric of fairness, whether such a metric will be accessible at prediction time, and the existence of an 'agnostic' notion of fairness that does not rely on any one measure but instead takes human feedback to correct for bias.
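To see what intersectional auditing involves in practice, here is a hypothetical helper that measures a statistic over every combination of protected attributes; the obstacle it surfaces is that intersectional cells shrink quickly, making per-cell estimates noisy.

```python
import itertools
import numpy as np

def intersectional_positive_rates(y_pred, attrs):
    """Positive-prediction rate and sample size for every intersection of
    protected attributes, e.g. attrs = {"race": ..., "gender": ...}."""
    names = list(attrs)
    levels = [np.unique(attrs[n]) for n in names]
    rates = {}
    for combo in itertools.product(*levels):
        mask = np.ones(len(y_pred), dtype=bool)
        for name, value in zip(names, combo):
            mask &= attrs[name] == value
        if mask.any():
            rates[combo] = (y_pred[mask].mean(), int(mask.sum()))
    return rates

# With k binary attributes there are 2**k cells, and each added attribute
# roughly halves the data per cell, so intersectional estimates get noisy fast.
```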

Another important consideration is the dynamics of fairness. Models are seldom deployed in one-shot settings and are usually used in conjunction with several other predictors. In such a setting, how does compositionality affect algorithmic fairness? That is, do individual components that each satisfy some condition of 'fairness' continue to adhere to it when composed together to decide a single outcome?
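The answer is not automatically yes. In the hypothetical simulation below (with demographic parity as the chosen condition), two screening stages each pass both groups at identical rates in isolation, yet their conjunction does not:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
group = rng.integers(0, 2, n)

stage1 = rng.random(n) < 0.5                             # 50% pass rate, both groups
agree = rng.random(n) < np.where(group == 0, 0.5, 0.9)   # stage2 tracks stage1 more for group 1
stage2 = np.where(agree, stage1, ~stage1)                # also 50% pass rate, both groups

final = stage1 & stage2                                  # must pass both stages
for g in (0, 1):
    m = group == g
    print(f"group {g}: stage1 {stage1[m].mean():.2f}, "
          f"stage2 {stage2[m].mean():.2f}, composed {final[m].mean():.2f}")
# Each stage alone satisfies parity (~0.50 each), but the composition
# passes ~25% of group 0 versus ~45% of group 1.
```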

Another source of dynamism is the impact that algorithmic decision-making systems have on the environment in which they operate. Models that determine outcomes also influence the incentives of those who interact with them, and hence it becomes imperative to consider long-term dynamics when designing 'fair' algorithms. We also need to reconcile the individual motives of the different actors in the system and incentivize them to behave ethically.
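A toy sketch of that incentive effect (the threshold, the gaming margin, and the score distribution are all invented for illustration): once a decision threshold on a feature becomes known, individuals just below it have a reason to manipulate the feature, shifting the very distribution the model was trained on.

```python
import numpy as np

rng = np.random.default_rng(3)
score = rng.normal(600, 50, 10_000)   # a credit-score-like feature
threshold, max_gaming = 640, 25       # applicants can shift the feature by up to 25

# Applicants just below the published threshold game the feature to cross it.
gamed = np.where((score >= threshold - max_gaming) & (score < threshold),
                 threshold, score)
print("approved before strategic response:", round(float((score >= threshold).mean()), 3))
print("approved after strategic response: ", round(float((gamed >= threshold).mean()), 3))
# The feature's meaning changes once it drives decisions, so a model (and any
# fairness guarantee) tuned to the old distribution no longer applies.
```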

Lastly, Chouldechova and Roth enumerate open questions in modeling and correcting for bias in data, namely: How does bias arise in data? How do we correct for it? How do we account for feedback loops, where biased predictions lead to further biased training data in future epochs? Enforcing any notion of fairness on biased data entails a drop in measured model accuracy, which raises the question of how we go about validating our 'fair' predictions.
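A minimal simulation of such a feedback loop (the predictive-policing framing and all rates are hypothetical): patrols are allocated in proportion to past arrest counts, arrests scale with patrol presence, and the resulting data keep reconfirming the allocation that produced them.

```python
import numpy as np

rng = np.random.default_rng(2)
true_crime_rate = np.array([0.10, 0.10])   # two districts, identical crime rates
patrol_share = np.array([0.6, 0.4])        # historical bias toward district A
population = 10_000

for epoch in range(5):
    # An arrest requires both a crime and a patrol present to record it.
    arrests = rng.binomial(population, true_crime_rate * patrol_share)
    patrol_share = arrests / arrests.sum()  # "retrain" allocation on new arrest data
    print(epoch, patrol_share.round(3))
# District A keeps receiving ~60% of patrols: the arrest data generated under
# the biased allocation keep confirming that allocation, epoch after epoch.
```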


Original paper by Alexandra Chouldechova and Aaron Roth: https://dl.acm.org/doi/pdf/10.1145/3376898

