Montreal AI Ethics Institute

Transferring Fairness under Distribution Shifts via Fair Consistency Regularization

August 2, 2023

🔬 Research Summary by Bang An, a Ph.D. student at the University of Maryland, College Park, specializing in trustworthy machine learning.

[Original paper by Bang An, Zora Che, Mucong Ding, and Furong Huang]


Overview: This paper addresses fairness violations that arise when machine learning models are deployed in environments different from the ones they were trained in. We propose a practical algorithm, with fair consistency regularization as its key component, to preserve model fairness under distribution shifts. Our work was published at NeurIPS 2022.


Introduction

The increasing reliance on ML models in high-stakes tasks has raised major concerns about fairness violations. Although there has been a surge of work on improving algorithmic fairness, most of it assumes identical training and test distributions. In many real-world applications, however, this assumption is violated: fair models are frequently deployed in environments that differ from the ones they were trained in, and recent work has observed fairness collapse as a result. For example, Schrouff et al. (2022) found that a model that performs fairly according to metrics evaluated in “Hospital A” shows unfairness when applied to “Hospital B.” In this paper, we study how to maintain fairness under distribution shifts.

Key Insights

Research Problem: Transferring Fairness under Distribution Shifts

In this paper, we consider the case where we have labeled data in the source domain and unlabeled data in the target domain. We can train a fair model on the labeled source data using existing methods such as adversarial learning (Madras et al., 2018). However, we observe that this model is no longer fair in the target domain. We therefore investigate how to adapt a fair source model to the target domain, with the goal of achieving both accuracy and fairness in both domains.
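As a rough illustration of this source-domain starting point, here is a minimal PyTorch sketch of adversarial fair training in the spirit of Madras et al. (2018). The architecture, loss weighting, and training loop are our own simplified assumptions, not the paper's implementation: an adversary tries to recover the group attribute from the learned representation, and the encoder is trained to defeat it.

```python
import torch
import torch.nn as nn

# Hypothetical setup: 16-dim features, binary task label y, binary group attribute a.
encoder = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 8))
classifier = nn.Linear(8, 2)  # predicts the task label y from the representation
adversary = nn.Linear(8, 2)   # tries to predict the group a from the representation

opt_main = torch.optim.Adam(
    list(encoder.parameters()) + list(classifier.parameters()), lr=1e-3
)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

def train_step(x, y, a, adv_weight=1.0):
    # 1) Train the adversary to predict the group from a frozen representation.
    adv_loss = ce(adversary(encoder(x).detach()), a)
    opt_adv.zero_grad(); adv_loss.backward(); opt_adv.step()

    # 2) Train encoder + classifier to predict y while fooling the adversary,
    #    which pushes group information out of the representation.
    z = encoder(x)
    loss = ce(classifier(z), y) - adv_weight * ce(adversary(z), a)
    opt_main.zero_grad(); loss.backward(); opt_main.step()

# Dummy batch showing the expected shapes.
x = torch.randn(64, 16)
y = torch.randint(0, 2, (64,))
a = torch.randint(0, 2, (64,))
train_step(x, y, a)
```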

What are Distribution Shifts?

We characterize distribution shifts by assuming that the two domains share the same underlying data-generation process: data is generated from a set of latent factors by a fixed generative model, and a shift occurs when the marginal distribution of some of those factors changes. We categorize distribution shifts into three types:

1) Domain shift, where the source and target distributions comprise data from related but distinct domains (e.g., a model trained in Hospital A but tested in Hospital B).

2) Subpopulation shift, where the two domains overlap, but the relative proportions of subpopulations differ (e.g., the proportion of female candidates increases at test time).

3) Hybrid shift, where domain shift and subpopulation shift happen simultaneously. 

We find domain shift more challenging for transferring fairness, since the model’s performance is unpredictable in unseen domains. Our analysis suggests encouraging the model to be consistently fair across different factor values; a toy construction of a subpopulation shift is sketched below.
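To make this taxonomy concrete, here is a small NumPy sketch (our own toy construction, not taken from the paper) of a subpopulation shift: the generative model mapping the group factor to features and labels is held fixed, and only the marginal proportion of the group changes between source and target.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_domain(n, p_group1):
    """Toy dataset in which group membership shifts the feature mean.

    The generative model (per-group feature mean, label rule) is fixed;
    only the group proportion p_group1 differs between domains, which is
    exactly a subpopulation shift in the taxonomy above.
    """
    a = rng.binomial(1, p_group1, size=n)           # group attribute
    x = rng.normal(loc=2.0 * a, scale=1.0, size=n)  # same generator in both domains
    y = (x + rng.normal(scale=0.5, size=n) > 1.0).astype(int)
    return x, y, a

x_src, y_src, a_src = sample_domain(1000, p_group1=0.2)  # source: 20% group 1
x_tgt, y_tgt, a_tgt = sample_domain(1000, p_group1=0.7)  # target: 70% group 1
```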

Our Approach: A Self-training Method with Fair Consistency Regularization

We draw inspiration from recent progress on self-training for transferring accuracy under domain shifts (Wei et al., 2020). As illustrated in Fig 2, suppose the source and target domains are faces from two different datasets, and image transformations (e.g., illumination changes) can connect the domains. Prior work has shown that consistency regularization, which encourages consistent predictions under transformations of the same input, can propagate labels from source to target and thus transfer accuracy. However, these methods do not consider fairness.

Taking demographic groups into consideration, we propose fair consistency regularization. Specifically, we encourage similar consistency across different groups. By dynamically reweighting each group’s consistency loss according to the model’s performance, the algorithm pushes the model to pay more attention to the high-error group during training. Our method yields a model that is fair in the source domain and has similar consistency across groups, which directly translates into similar accuracy across groups in the target domain, allowing fairness to transfer. We evaluate our method under different types of distribution shifts on synthetic and real datasets. For example, Fig 3 shows that our method significantly outperforms others in achieving accuracy and fairness simultaneously.
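As a minimal sketch of the reweighting idea, under our own simplified reading rather than the authors' released code: compute a consistency loss per demographic group between the model's predictions on an input and on a transformed view of it, then upweight whichever group is currently less consistent. The KL-based consistency measure and the softmax weighting rule here are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def fair_consistency_loss(model, x, x_aug, a, temperature=1.0):
    """Per-group consistency between predictions on x and an augmented view x_aug.

    a is a binary group attribute; the batch is assumed to contain both groups.
    The softmax-over-group-losses weighting is our stand-in for the paper's
    dynamic reweighting rule.
    """
    with torch.no_grad():
        p = F.softmax(model(x), dim=1)          # pseudo-target distribution
    log_q = F.log_softmax(model(x_aug), dim=1)  # predictions on the transformed view
    per_example = F.kl_div(log_q, p, reduction="none").sum(dim=1)  # per-example KL

    group_losses = torch.stack([per_example[a == g].mean() for g in (0, 1)])
    # Dynamic reweighting: the less consistent (higher-loss) group gets more weight.
    weights = F.softmax(group_losses.detach() / temperature, dim=0)
    return (weights * group_losses).sum()
```

In the full self-training loop described above, this term would be added to a fairness-aware supervised loss on the labeled source data and applied to unlabeled target data through the transformation set.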

Between the lines

We investigate an important but underexplored real-world problem. Our algorithm has the potential to substantially improve the fairness of machine learning models, especially when they are deployed in environments different from the ones they were trained in. However, like other self-training methods, our method relies on a well-defined set of data transformations. Future work will relax this limitation to extend the approach to more real-world problems.

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.
