Montreal AI Ethics Institute

Democratizing AI ethics literacy


Bias Amplification Enhances Minority Group Performance

February 1, 2024

🔬 Research Summary by Gaotang Li and Jiarui Liu.

Gaotang Li is a senior undergraduate student studying Computer Science and Mathematics at the University of Michigan.

Jiarui Liu is a first-year Master’s student in Intelligent Information Systems at the Language Technologies Institute at Carnegie Mellon University.

[Original paper by Gaotang Li, Jiarui Liu, and Wei Hu]


Overview: This paper introduces “Bam,” a novel training algorithm for neural networks that addresses low accuracy on rare subgroups, a common failure mode of standard training. Bam operates in two stages: first, it amplifies the model’s biases through trainable auxiliary variables; second, it reweights the training dataset based on the errors of the bias-amplified model. This approach improves accuracy for underrepresented groups while minimizing the need for costly group annotations.


Introduction

Imagine this: You’re using an AI image classifier to sort your vacation photos, but it keeps mistaking the background for the main subject. Frustrating, right? This is due to what’s known as ‘spurious correlations’ in machine learning, where models make decisions based on irrelevant features. It’s a widespread issue, affecting everything from image recognition to natural language processing and reinforcement learning. 

Our research tackles this challenge by focusing on group robustness, aiming to enhance accuracy for the worst-off groups in a dataset. These are the groups where the model’s reliance on irrelevant attributes is most misleading. Traditional methods to improve this accuracy involve a costly process of annotating every training example with group information, which is often impractical. We propose a different approach: Bam, which amplifies the biases in an initial model to better guide the training of a subsequent, more balanced model. This method promises to enhance group robustness without the need for extensive group annotations in the training data, a significant step forward in making AI more reliable and fair.

Key Insights

Unveiling “Bam”: A New Solution to Spurious Correlations

The Challenge: Improving Group Robustness

The key to solving this problem lies in improving the model’s group robustness, meaning its accuracy on the worst-off groups in the dataset, where reliance on irrelevant features is most misleading. Traditional methods to enhance this accuracy involve annotating every training example with group information, which is often impractical and expensive.

Introducing Bam: A Novel Two-Stage Approach

We propose “Bam” – a novel, two-stage training algorithm to address these challenges. Bam aims to improve group robustness without requiring extensive group annotations in training data. How does it work? Let’s break it down:

Stage One: Bias Amplification

In the first stage, Bam amplifies the inherent biases in the initial training model. This is achieved by introducing trainable auxiliary variables for each training sample. These variables exaggerate the model’s biases, making them more prominent and easier to identify.
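
To make this stage concrete, here is a minimal PyTorch sketch of the idea rather than the authors’ exact implementation. The wrapper class, the per-example parameter `aux`, and the coefficient `lambda_aux` are all illustrative names; the sketch assumes the amplified logits are the network’s logits plus a scaled, trainable per-example offset.

```python
import torch
import torch.nn as nn

class BiasAmplifier(nn.Module):
    """Stage one (sketch): wrap a base network and attach a trainable
    auxiliary logit vector to each training example. The auxiliaries
    absorb hard examples, pushing the network itself to lean even
    harder on easy, spuriously correlated features."""

    def __init__(self, base_model, num_samples, num_classes, lambda_aux=0.5):
        super().__init__()
        self.base_model = base_model
        # One trainable auxiliary logit vector per training example.
        self.aux = nn.Parameter(torch.zeros(num_samples, num_classes))
        self.lambda_aux = lambda_aux  # amplification strength (assumed name)

    def forward(self, x, sample_idx):
        # Amplified logits = network logits + scaled per-example offset.
        return self.base_model(x) + self.lambda_aux * self.aux[sample_idx]
```

Training this wrapper with the usual cross-entropy loss lets the auxiliaries, rather than the network, fit the atypical examples, which is what makes the network’s remaining errors a useful bias signal.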

Stage Two: Rebalanced Training

In the second stage, we take the outputs of the bias-amplified model and use them to reweight the training dataset, giving more weight to the samples that the amplified biases led the model to misclassify. The model then continues training on this adjusted dataset, gradually learning to focus on the right features and ignore the misleading ones.
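
Under the same assumptions, and reusing the hypothetical stage-one wrapper above, a sketch of the second stage might look as follows: the stage-one model’s mistakes, judged from the raw network logits without the auxiliary offsets, determine per-example sampling weights. The helper name and the `upweight` factor are illustrative, not the paper’s exact procedure.

```python
import torch
from torch.utils.data import WeightedRandomSampler

def rebalanced_sampler(stage1, loader, upweight=20.0):
    """Stage two (sketch): upweight the examples the bias-amplified
    model misclassifies so they are drawn more often in continued
    training. Assumes `loader` iterates the training set in a fixed
    order (shuffle=False) so weights line up with dataset indices."""
    stage1.eval()
    weights = []
    with torch.no_grad():
        for x, y in loader:
            # Judge errors from the bare network, without the auxiliaries.
            preds = stage1.base_model(x).argmax(dim=1)
            w = torch.ones_like(y, dtype=torch.float)
            w[preds != y] = upweight  # misclassified => sampled more often
            weights.append(w)
    all_w = torch.cat(weights)
    return WeightedRandomSampler(all_w.tolist(), num_samples=len(all_w),
                                 replacement=True)
```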

The Results: Improved Accuracy and Reduced Need for Annotations

What makes Bam stand out is its ability to improve the worst-off group’s accuracy without relying heavily on group annotations. Our research shows that Bam achieves competitive performance compared to existing methods in both computer vision and natural language processing applications. Additionally, Bam introduces a simple stopping criterion based on minimizing the difference between per-class accuracies, which eliminates the need for group annotations with little or no loss in worst-group accuracy.

Empirical Results and Analysis

Our empirical tests of Bam demonstrate its effectiveness in improving group robustness. Evaluated on various standard benchmark datasets for spurious correlations, Bam achieved competitive worst-group accuracy compared to existing methods. Notably, Bam performs robustly across several hyperparameter choices and dataset characteristics.

Another notable aspect of Bam is its stopping criterion, dubbed “ClassDiff,” which tracks the class accuracy difference. It rests on the observation that a low class accuracy difference is strongly correlated with high worst-group accuracy, allowing us to potentially eliminate the need for group annotations with little or no loss in worst-group accuracy.
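
Assuming ClassDiff is simply the gap between per-class validation accuracies (the paper’s exact formulation may differ in detail), the criterion could be computed along these lines:

```python
import torch

def class_accuracy_difference(preds, labels):
    """ClassDiff (sketch): gap between the best and worst per-class
    accuracy on a validation set. Stage-two training would stop at
    the epoch where this gap is smallest."""
    accs = [(preds[labels == c] == c).float().mean() for c in labels.unique()]
    return (max(accs) - min(accs)).item()
```

Because it uses only class labels, which every classification dataset already has, this check requires no group annotations at all.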

Between the lines

As introduced in this research, Bam represents a significant advancement in tackling spurious correlations in deep learning, specifically focusing on improving the worst-group accuracy across various NLP and CV benchmarks. Its approach combines a bias amplification scheme built on trainable auxiliary variables with a stopping criterion dubbed “ClassDiff,” and it has shown effectiveness under various experimental settings.

A theoretical analysis of the bias amplification scheme could provide deeper insights into the mechanisms of how deep learning models develop and rely on spurious correlations. Such an analysis would not only enhance our understanding of the behavior of models but could also guide the development of more robust and fair deep learning systems.

