Unlocking Accuracy and Fairness in Differentially Private Image Classification

December 31, 2023

🔬 Research Summary by Judy Hanwen Shen, a Computer Science Ph.D. student at Stanford University broadly working on algorithmic fairness, differential privacy, and explainability through the lens of data composition.

[Original paper by Leonard Berrada*, Soham De*, Judy Hanwen Shen*, Jamie Hayes, Robert Stanforth, David Stutz, Pushmeet Kohli, Samuel L. Smith, and Borja Balle]


Overview: In high-stakes settings such as health care, machine learning models should uphold both privacy protections for data contributors and fairness across the subgroups on which the models will be deployed. Although prior works have suggested that tradeoffs may exist between accuracy, privacy, and fairness, this paper demonstrates that models fine-tuned with differential privacy can achieve accuracy comparable to that of non-private classifiers. Consequently, we show that privacy-preserving models in this regime do not display greater performance disparities across demographic groups than non-private models.


Introduction

When seeking medical advice, whether online or in a clinic, individuals outside the majority group may find themselves uncertain about the validity of the information they receive, particularly as it relates to their unique identity. The ongoing digitalization of health care presents an opportunity to develop algorithms that yield improved outcomes for marginalized subpopulations. In this context, preserving the confidentiality of one’s health records becomes a critical goal, alongside leveraging the predictive capabilities of models trained on population-level records. Ideally, any machine learning model deployed in a healthcare setting should therefore offer accuracy, privacy, and fairness.

The holy grail of trustworthy machine learning is achieving societally aligned outcomes alongside excellent model performance. In our work, we question preconceived notions about the accuracy and fairness shortcomings of models trained with differential privacy (DP). We introduce a reliable and accurate method for DP fine-tuning of large vision models and show that we can reach the practical performance of previously deployed non-private models. Furthermore, these highly accurate models exhibit disparities across subpopulations that are no larger than those we observe in non-private models of comparable accuracy.

Key Insights 

Training highly accurate models with differential privacy

Differential privacy (DP) is the gold standard for training neural networks while preserving the privacy of individual data points. The technique guarantees that the influence of any single training data point remains limited and obfuscated during training. However, because of the noise injected to achieve this obfuscation, the privacy protection can come at the cost of model accuracy, particularly in modern settings where model parameters are high dimensional. This raises the question of whether privacy protections can be justified at the cost of accuracy in safety-critical domains such as health care.
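
The mechanism behind DP-SGD, the standard DP training algorithm referenced in this work, can be sketched in a few lines: clip each example’s gradient to bound its influence, then add calibrated Gaussian noise before applying the update. The sketch below is an illustrative, unoptimized PyTorch version; the function name and hyperparameter values (`clip_norm`, `noise_multiplier`, `lr`) are placeholders rather than the paper’s settings.

```python
import torch

def dp_sgd_step(model, loss_fn, xs, ys, lr=0.1, clip_norm=1.0, noise_multiplier=1.0):
    """One illustrative DP-SGD step: clip each example's gradient to a norm bound,
    sum the clipped gradients, add Gaussian noise scaled to that bound, and apply
    the averaged update."""
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]

    for x, y in zip(xs, ys):
        # Per-example gradient via a naive loop (clear, not efficient).
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = torch.clamp(clip_norm / (total_norm + 1e-12), max=1.0)
        for s, g in zip(summed, grads):
            s.add_(g * scale)  # each example's contribution is now bounded

    with torch.no_grad():
        for p, s in zip(params, summed):
            # The noise scale is tied to the clipping bound, so the added noise
            # masks any single example's (bounded) influence on the update.
            noise = torch.randn_like(s) * noise_multiplier * clip_norm
            p.add_(-(lr / len(xs)) * (s + noise))
```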

Our work introduces practical techniques to close the accuracy gap between private and non-private models on image classification tasks. These techniques include parameter averaging to improve model convergence and using model families without batch normalization. Our results demonstrate that pre-training on publicly available datasets such as ImageNet and then fine-tuning with privacy-preserving methods yields private chest X-ray classifiers whose AUC closely matches that of non-private models.
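
Parameter averaging can be implemented as an exponential moving average (EMA) of the weights, maintained alongside training and used at evaluation time. The sketch below is a minimal illustration under assumed defaults; the class name and decay value are not the paper’s configuration.

```python
import copy
import torch

class WeightAverager:
    """Maintains an exponential moving average (EMA) of a model's parameters.

    Evaluating the averaged copy smooths out the noise that DP-SGD injects
    into individual updates, which tends to improve convergence.
    """

    def __init__(self, model, decay=0.999):
        self.decay = decay
        self.averaged = copy.deepcopy(model).eval()
        for p in self.averaged.parameters():
            p.requires_grad_(False)

    @torch.no_grad()
    def update(self, model):
        for avg_p, p in zip(self.averaged.parameters(), model.parameters()):
            avg_p.mul_(self.decay).add_(p, alpha=1.0 - self.decay)
```

A typical use would be to call `update(model)` after every training step (for instance, after each `dp_sgd_step` above) and to report metrics on the averaged copy rather than the raw, noisier weights.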

When differential privacy does not necessarily imply worsened disparities 

Another challenge of deploying differentially private models is the subgroup disparities that private training may introduce. For example, some subgroups defined by class labels or sensitive attributes may experience greater accuracy deterioration than others under private training. In contrast, our work finds that models trained with differential privacy, whether fine-tuned or trained from scratch, exhibit group accuracy disparities similar to those of non-private models at the same overall accuracy. First, we highlight the necessity of evaluating disparities using averaged weights to overcome the higher noise level in models trained with DP-SGD. Second, AUC on chest X-ray classification is not systematically worse for private models than for non-private ones. For the important datasets we examine, tradeoffs between subgroup outcomes and differential privacy can be mitigated by training more accurate models.
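
As a toy illustration of the kind of evaluation described above, the snippet below computes per-group accuracy and the worst-case gap between groups; the same pattern applies with per-group AUC. The function and variable names are illustrative, not from the paper.

```python
import numpy as np

def group_accuracy_gap(y_true, y_pred, groups):
    """Per-group accuracy and the worst-case disparity (max minus min).

    y_true, y_pred, and groups are 1-D arrays of equal length; `groups` holds
    a subgroup label (e.g., class label or sensitive attribute) per example.
    """
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    accs = {g: float(np.mean(y_pred[groups == g] == y_true[groups == g]))
            for g in np.unique(groups)}
    return accs, max(accs.values()) - min(accs.values())
```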

Between the lines

Differential privacy is often considered impractical for model training because of its perceived impact on accuracy and fairness. Our findings show that it is sometimes possible to achieve very good accuracy, fairness, and privacy simultaneously. While the repercussions of overlooking fairness and privacy may not be immediately evident on common academic benchmarks, such considerations are essential when training and deploying models on real-world data.

The creation of AI assistive technology that is aligned with human values necessitates a thorough examination of the diverse and often intricate desiderata specific to each use case. While our work specifically investigates the alignment of X-ray classification with privacy and fairness, identifying which values to prioritize across various other practical problems is a ripe area for future research.
