Montreal AI Ethics Institute

Democratizing AI ethics literacy


A Snapshot of the Frontiers of Fairness in Machine Learning (Research Summary)

October 5, 2020

Summary contributed by our Artist-in-Residence Falaah Arif Khan. She’s also a Research Fellow in the CVIT Lab at the International Institute of Information Technology.

Link to original paper + authors at the bottom.


Mini-summary: In this succinct review of the scholarship on Fair Machine Learning (ML), Chouldechova and Roth outline the major strides taken towards understanding algorithmic bias, discuss the merits and shortcomings of proposed approaches, and present salient open questions on the frontiers of Fair ML. These include statistical vs. individual notions of fairness, the dynamics of fairness in socio-technical systems, and the detection and correction of algorithmic bias.

Full summary:

The motivation behind the paper is to highlight the key research directions in Fair ML that provide a scientific foundation for understanding algorithmic bias. These broadly include: identifying bias encoded in data without access to true outcomes (for example, we have data about who was arrested, not about who committed a crime); the utilitarian approach to optimization, which caters to the majority without taking minority groups into account; and the ethics of exploration. The role of exploration is a key one, since in order to validate our predictions we must have data on how the outcome in fact played out. This raises several important questions: Is the impact of exploration overwhelmingly felt by one subgroup? If we deem the risks of exploration too high, by how much does a lack of exploration slow learning? Is it ethical to sacrifice the well-being of current populations for the perceived well-being of future populations?
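The arrested-versus-offended gap mentioned above can be made concrete with a toy calculation. The rates below are hypothetical, purely for illustration: both groups behave identically, but the recorded labels do not say so.

```python
# Illustrative numbers (not from the paper): both groups offend at the same
# true rate, but group "b" is policed twice as heavily, so the recorded
# arrest rate -- the only label a learner ever sees -- differs by group.

true_offense_rate = {"a": 0.10, "b": 0.10}   # identical true behaviour
policing_rate     = {"a": 0.30, "b": 0.60}   # chance an offense is recorded

recorded_arrest_rate = {
    g: true_offense_rate[g] * policing_rate[g] for g in true_offense_rate
}
# The arrest labels encode policing intensity, not behaviour:
# recorded_arrest_rate is {"a": 0.03, "b": 0.06} (up to float rounding)
```

A model trained on `recorded_arrest_rate`-style labels would learn a group difference that does not exist in the true outcomes.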

The next important research direction is one that seeks to formalize the definition of fairness. There are several proposed definitions, the most popular being the statistical definition of fairness. Such a formulation enforces parity in some chosen statistical measure across all groups in the data. Its simplicity, assumption-free nature, and the ease with which a statistically fair allocation can be verified make this definition popular. However, a major shortcoming is the proven impossibility of simultaneously equalizing multiple desirable statistical measures. A statistical definition of fairness can also be computationally expensive to model.
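As a concrete (and deliberately simplified) instance of such a criterion, the sketch below checks demographic parity, i.e. equal positive-prediction rates across groups. The predictions and group labels are invented for illustration:

```python
# A minimal sketch of verifying one statistical fairness criterion,
# demographic parity: positive-prediction rates should match across groups.

def positive_rate(preds, groups, g):
    """Fraction of individuals in group g who received a positive prediction."""
    members = [p for p, grp in zip(preds, groups) if grp == g]
    return sum(members) / len(members)

preds  = [1, 0, 1, 1, 0, 0, 1, 0]                    # binary model decisions
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]    # protected-group labels

rate_a = positive_rate(preds, groups, "a")   # 3/4
rate_b = positive_rate(preds, groups, "b")   # 1/4
disparity = abs(rate_a - rate_b)             # 0.5 -> parity is violated
```

This ease of auditing from predictions alone is exactly the verifiability advantage the paper credits to statistical notions.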

The second popular notion is Individual Fairness, which requires that, for a given task, the algorithm treat similar individuals similarly. While this is semantically richer, it makes strong assumptions that are difficult to realize in practice.
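The metric-based formulation of this idea bounds the difference in outputs by the distance between individuals. The sketch below is hypothetical throughout: the similarity metric, the constant `L`, and the toy scoring rules are placeholders, and choosing a defensible task-specific metric is precisely the strong assumption noted above.

```python
# Individual fairness as a Lipschitz-style constraint: similar individuals
# should not receive disproportionately different scores.

def euclidean(x, y):
    """A stand-in similarity metric over feature vectors."""
    return sum((a - b) ** 2 for a, b in zip(x, y)) ** 0.5

def violates_individual_fairness(x, y, score, d, L=1.0):
    """True if the score gap exceeds L times the distance between x and y."""
    return abs(score(x) - score(y)) > L * d(x, y)

# A hard-threshold rule treats near-identical individuals very differently
# at the cliff edge, violating the constraint.
score = lambda v: 1.0 if v[0] > 0.85 else 0.0
x, y = (0.9, 0.1), (0.8, 0.1)    # nearly identical feature vectors
violates_individual_fairness(x, y, score, euclidean)   # True
```

A smooth scoring rule over the same pair (e.g. `lambda v: 0.5 * v[0]`) would pass the check, which is what "treating similar individuals similarly" demands.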

Chouldechova and Roth then go on to present questions around Intersectional Fairness, namely: how different algorithmic biases compound for individuals who fall at the intersection of multiple protected groups. They also question the feasibility of a ‘good’ metric of fairness, whether such a metric would be accessible at prediction time, and the existence of an ‘agnostic’ notion of fairness that does not rely on any one measure but instead takes human feedback to correct for bias.

Another important consideration is the dynamics of fairness. Models are seldom deployed in one-shot settings and are usually used in conjunction with several other predictors. In such a setting, how does compositionality affect algorithmic fairness? That is, do individual components that satisfy conditions of ‘fairness’ continue to adhere to the same degree of fairness when composed together to decide a single outcome?
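A toy numeric example (not from the paper) shows how parity can fail to compose: two screens that each accept half of every group can, in conjunction, accept half of one group and none of the other. The groups and acceptance sets are invented for illustration.

```python
# Two classifiers, each satisfying demographic parity on its own, composed
# with a logical AND (an applicant must pass BOTH screens).

group_a = [0, 1, 2, 3]            # individuals, identified by index
group_b = [0, 1, 2, 3]

c1_a, c1_b = {0, 1}, {0, 1}       # classifier 1: accepted indices per group
c2_a, c2_b = {0, 1}, {2, 3}       # classifier 2: accepted indices per group

def rate(accepted, group):
    """Acceptance rate of a group under a set of accepted indices."""
    return len(accepted & set(group)) / len(group)

# Each classifier alone is "fair": a 0.5 acceptance rate in both groups.
assert rate(c1_a, group_a) == rate(c1_b, group_b) == 0.5
assert rate(c2_a, group_a) == rate(c2_b, group_b) == 0.5

# Composed, group A keeps a 0.5 rate while group B drops to 0.
and_a = rate(c1_a & c2_a, group_a)   # {0, 1} survives both screens -> 0.5
and_b = rate(c1_b & c2_b, group_b)   # the accepted sets are disjoint -> 0.0
```

The failure comes purely from which individuals each screen accepts, information that a group-level parity audit of either component never sees.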

Another source of dynamism is the impact that algorithmic decision-making systems have on their environment. Models that determine outcomes also influence the incentives of those who interact with them, and hence it becomes imperative to consider long-term dynamics when designing ‘fair’ algorithms. We also need to reconcile the individual motives of the different actors in the system and incentivize them to behave ethically.

Lastly, Chouldechova and Roth enumerate open questions in modeling and correcting for bias in data, namely: How does bias arise in data? How do we correct for it? How do we take into account feedback loops, where biased predictions lead to further biased training data in future epochs? Enforcing any notion of fairness on biased data incurs a drop in model accuracy, which raises the question of how we go about validating our ‘fair’ predictions.
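Such a feedback loop can be caricatured in a few lines: if outcomes are only observed in proportion to how often the model already selects a group, an initial bias throttles its own correction. The rates and the update rule below are invented for illustration, not taken from the paper.

```python
# A deterministic caricature of a selection-driven feedback loop. Both groups
# are truly identical, but the model starts with a biased belief about "b",
# and only observes outcomes for individuals it already selects.

TRUE_RATE = 0.5                       # the groups behave identically
belief = {"a": 0.5, "b": 0.2}         # initial bias against group "b"
lr = 0.05                             # learning rate

for _ in range(100):
    for g in belief:
        # Outcomes are observed only for selected individuals, and selection
        # tracks the current belief, so the correction is self-throttled.
        observed_fraction = belief[g]
        belief[g] += lr * observed_fraction * (TRUE_RATE - belief[g])

# belief["a"] stays at the truth; belief["b"] is still below 0.5 after
# 100 rounds, because its own bias slows the rate at which it is corrected.
```

The bias does decay here, but far more slowly than for the favoured group; in a real pipeline where predictions also shape the next epoch's training labels, the gap can persist or widen.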


Original paper by Alexandra Chouldechova and Aaron Roth: https://dl.acm.org/doi/pdf/10.1145/3376898


© 2025 Montreal AI Ethics Institute. This work is licensed under a Creative Commons Attribution 4.0 International License.