
Exchanging Lessons Between Algorithmic Fairness and Domain Generalization (Research Summary)

November 30, 2020

Summary contributed by our Artist-in-Residence Falaah Arif Khan. She’s also a Research Fellow in the CVIT Lab at the International Institute of Information Technology.

Link to original paper + authors at the bottom.


Overview: This paper unifies two seemingly disparate research directions in Machine Learning (ML), namely Domain Generalization and Fair Machine Learning, under the common goal of “learning algorithms robust to changes across domains or population groups”. It draws links between several popular methods in the Domain Generalization and Fair-ML literature and forges an exciting new research area at the intersection of the two.


Both algorithmic fairness and domain generalization share the objective of reducing a model’s sensitivity to its training distribution. In algorithmic fairness, we wish to make classifications that are ‘fair’ according to a context-specific notion of fairness, such that we do not disadvantage individuals because of their membership in a group defined by sensitive features such as race or gender. In domain generalization, we wish to learn features that are ‘domain-invariant’, so that the classifier’s predictions are based on object information rather than on stylistic information, such as colour, which may vary across data domains. This exciting line of work attempts to take the best of both worlds, sharing insights and methods across Fair-ML and domain generalization to design algorithms that, within their specific context, are both robust and “fair”. Group membership can be treated as a domain-specific attribute, so the popular conception of ‘Fairness through Blindness’, which removes all sensitive attributes (such as gender or race) from consideration, has a natural connection to the domain-invariant features learned in domain generalization tasks.

The authors provide a succinct review of both the domain generalization and Fair-ML literatures, covering Distributionally Robust Optimization (DRO), invariant learning methods such as Invariant Risk Minimization (IRM), and Risk Extrapolation on one side, and fairness notions such as demographic parity, equal opportunity, calibration, group sufficiency, and multi-calibration on the other. They then map common objectives across the two areas so that lessons from one can inform the other. Intuitively, group membership can be thought of as domain information: in the Fair-ML literature, group membership is based on a protected attribute such as gender or race, while in domain generalization the target domain is a mixture of multiple domains, which may or may not all be available during training.
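To make the parallel concrete, here is a minimal PyTorch sketch (not from the paper; the function names and interfaces are illustrative) juxtaposing the two kinds of objectives: the IRMv1 invariance penalty commonly used in invariant learning, and a demographic-parity gap of the kind measured in Fair-ML.

```python
import torch
import torch.nn.functional as F

def irm_penalty(logits, y):
    # IRMv1 invariance penalty for one environment: squared gradient of the
    # environment's risk with respect to a fixed "dummy" scale on the logits.
    # `y` is a float tensor of 0/1 labels.
    scale = torch.tensor(1.0, requires_grad=True)
    loss = F.binary_cross_entropy_with_logits(logits * scale, y)
    grad = torch.autograd.grad(loss, [scale], create_graph=True)[0]
    return grad.pow(2).sum()

def demographic_parity_gap(preds, group):
    # Fair-ML analogue: absolute difference in positive-prediction rates
    # between two demographic groups (coded 0 and 1).
    rate_0 = preds[group == 0].float().mean()
    rate_1 = preds[group == 1].float().mean()
    return (rate_0 - rate_1).abs()
```

In both cases the quantity being penalized measures how much the model’s behaviour depends on which partition (environment or demographic group) an example comes from.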

In Domain Generalization, we wish the algorithm to learn properties that will generalize well to the test distribution. In algorithmic fairness, on the other hand, our learning objective is dictated by the worldview we employ and the context-specific fairness notion we wish to satisfy.

The authors draw on fairness approaches that optimize worst-case performance without access to demographic information, and formulate an algorithm that learns domain invariance without access to environment information. They consider the realistic scenario in which a train-test partitioning of domains is not provided, since real-world domains overlap and a clean split of the domains present in the testing environment is practically infeasible. The proposed method, Environment Inference for Invariant Learning (EIIL), builds on invariant learning methods such as Invariant Risk Minimization (IRM): where IRM takes hand-crafted environment labels as given, EIIL infers the worst-case partition of the training data into environments. Performing invariant learning on these inferred partitions then yields good generalization.
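A rough PyTorch sketch of the environment-inference stage follows, under the assumption (consistent with the description above) that EIIL searches for the partition that maximizes the invariance penalty of a fixed reference model; this is an illustration, not the authors’ released code, and the hyperparameters are arbitrary.

```python
import torch
import torch.nn.functional as F

def soft_irm_penalty(logits, y, weights):
    # IRMv1-style penalty where each example's contribution to the environment
    # risk is weighted by its (soft) probability of belonging to that environment.
    scale = torch.tensor(1.0, requires_grad=True)
    losses = F.binary_cross_entropy_with_logits(logits * scale, y, reduction="none")
    risk = (weights * losses).sum() / weights.sum().clamp(min=1e-8)
    grad = torch.autograd.grad(risk, [scale], create_graph=True)[0]
    return grad.pow(2).sum()

def infer_environments(ref_logits, y, steps=500, lr=0.01):
    # Stage 1 of EIIL (sketch): given logits from a fixed, pre-trained reference
    # model (e.g. an ERM classifier), learn per-example soft assignments to two
    # environments by *maximizing* the invariance penalty, i.e. searching for
    # the partition across which the reference model is least invariant.
    ref_logits = ref_logits.detach()
    assign_logits = torch.zeros(len(y), requires_grad=True)
    opt = torch.optim.Adam([assign_logits], lr=lr)
    for _ in range(steps):
        q = torch.sigmoid(assign_logits)            # P(env = 1 | example i)
        penalty = soft_irm_penalty(ref_logits, y, q) + \
                  soft_irm_penalty(ref_logits, y, 1 - q)
        opt.zero_grad()
        (-penalty).backward()                       # gradient ascent on the penalty
        opt.step()
    # Stage 2 (not shown): run an invariant learner such as IRM using this
    # inferred hard partition in place of hand-crafted environment labels.
    return (torch.sigmoid(assign_logits) > 0.5).long()
```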

The authors demonstrate the robustness of EIIL, without requiring a priori knowledge of the environments, through experiments on the Color-MNIST dataset, and further show that EIIL directly optimizes the common fairness criterion of group sufficiency, without knowledge of sensitive groups, on the UCI Adult dataset.
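As a rough illustration of the fairness criterion at play, a group-sufficiency gap could be estimated along the following lines (a hypothetical helper for intuition only, not the paper’s evaluation code): group sufficiency asks that the observed positive rate at a given predicted score not depend on group membership.

```python
import torch

def group_sufficiency_gap(probs, y, group, n_bins=10):
    # Bin predicted probabilities and, within each bin, compare the observed
    # positive rate E[y | f(x)] across groups; report the largest gap.
    # (Illustrative estimator only.)
    bins = torch.clamp((probs * n_bins).long(), max=n_bins - 1)
    gaps = []
    for b in range(n_bins):
        rates = []
        for g in torch.unique(group):
            mask = (bins == b) & (group == g)
            if mask.any():
                rates.append(y[mask].float().mean().item())
        if len(rates) > 1:
            gaps.append(max(rates) - min(rates))
    return max(gaps) if gaps else 0.0
```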

The authors also demonstrate EIIL’s sensitivity to the choice of reference representation: empirically, the algorithm discovers suitable worst-case partitions only when the reference representation encodes the incorrect inductive bias by focusing on spurious features. This highlights that EIIL’s advantage over standard Empirical Risk Minimization approaches is limited and setting-specific.

They also propose interesting future directions in which domain generalization methods could be applied to create “fair” outcomes, such as scenarios where a distribution shift occurs because some societal harm has been corrected.

The paper puts forth an exciting research direction that emerges naturally from the shared objective of generalizing to an unseen domain and fulfilling a specific notion of fairness. The authors adeptly show how ideas from ‘Fairness through Blindness’ can help in learning domain invariance, and this motivates a deeper, more critical look at how two seemingly disparate sub-fields of machine learning can inform, and even bolster, the capabilities of one another.


Original paper by Elliot Creager, Jörn-Henrik Jacobsen, Richard Zemel: https://arxiv.org/abs/2010.07249

Want quick summaries of the latest research & reporting in AI ethics delivered to your inbox? Subscribe to the AI Ethics Brief. We publish bi-weekly.
