
Exchanging Lessons Between Algorithmic Fairness and Domain Generalization (Research Summary)

November 30, 2020

Summary contributed by our Artist-in-Residence Falaah Arif Khan. She's also a Research Fellow in the CVIT Lab at the International Institute of Information Technology.

Link to original paper + authors at the bottom.


Overview: This paper unifies two seemingly disparate research directions in machine learning (ML), namely Domain Generalization and Fair Machine Learning, under the common goal of "learning algorithms robust to changes across domains or population groups". It draws links between several popular methods in the Domain Generalization and Fair-ML literature and forges an exciting new research area at the intersection of the two.


Both algorithmic fairness and domain generalization share the objective of reducing sensitivity to the training distribution. In algorithmic fairness, we wish to make classifications that are 'fair' according to a context-specific notion of fairness, so that individuals are not disadvantaged because of their membership in a group defined by sensitive features such as race or gender. In domain generalization, we wish to learn features that are 'domain-invariant', so that the classifier's predictions are based on object information rather than stylistic information, such as color, which might vary across data domains. This exciting line of work attempts to take the best of both worlds, sharing insights and methods across Fair-ML and Domain Generalization to design algorithms that, within their specific context, are both robust and 'fair'. Group membership can be treated as a domain-specific attribute, so the popular conception of 'Fairness through Blindness', which removes all sensitive attributes (such as gender or race) from consideration, has a natural connection to the 'domain-invariant' features learned in domain generalization tasks.
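As a toy illustration (not from the paper; the tensors and column indices below are hypothetical), 'Fairness through Blindness' operationally amounts to withholding the sensitive columns from the model, which is the crudest analogue of a domain-invariant representation:

```python
import torch

def fairness_through_blindness(features, sensitive_cols):
    """Drop the sensitive columns (e.g. gender, race) before training,
    so the classifier is 'blind' to explicit group membership."""
    keep = [i for i in range(features.shape[1]) if i not in set(sensitive_cols)]
    return features[:, keep]

# Hypothetical usage: X has 10 features, columns 3 and 7 are sensitive.
X = torch.randn(128, 10)
X_blind = fairness_through_blindness(X, sensitive_cols=[3, 7])  # shape (128, 8)
```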

The authors provide a succinct review of both the Domain Generalization and Fair-ML literature, covering Distributionally Robust Optimization (DRO), invariant learning, Invariant Risk Minimization (IRM), and Risk Extrapolation on one side, and fairness notions such as demographic parity, equal opportunity, calibration, group sufficiency, and multi-calibration on the other. They also map some common objectives across the two areas so that lessons from one can inform the other. Intuitively, group membership can be thought of as domain information: in the Fair-ML literature, group membership is based on a protected attribute such as gender or race, while in domain generalization the target domain is a mixture of multiple domains, all of which may or may not be available during training.
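To make the group-as-domain mapping concrete, here is a minimal sketch (not from the paper; it assumes per-example losses, binary predictions, and integer group labels as PyTorch tensors) in which the same per-group quantities feed either a DRO-style worst-group objective or a demographic-parity check:

```python
import torch

def per_group_risks(per_example_loss, group):
    """Treat group membership exactly like environment labels: one risk per group/domain."""
    return torch.stack([per_example_loss[group == g].mean() for g in group.unique()])

def worst_group_risk(per_example_loss, group):
    """DRO-style objective: the risk of the worst-off group/domain."""
    return per_group_risks(per_example_loss, group).max()

def demographic_parity_gap(pred, group):
    """Largest difference in positive-prediction rate across groups."""
    rates = torch.stack([pred[group == g].float().mean() for g in group.unique()])
    return rates.max() - rates.min()
```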

In Domain Generalization, we wish the algorithm to learn properties that will generalize well to the test distribution. In algorithmic fairness, on the other hand, our learning objective is dictated by the worldview we employ and the context-specific fairness notion we wish to satisfy.

The authors draw on fairness approaches that optimize worst-case performance without access to demographic information and formulate an algorithm that learns domain invariance without access to environment labels. They consider the realistic scenario in which the partitioning of the data into domains is not provided, since real-world domains overlap and a clean split into the domains present in the test environment is practically infeasible. The proposed method, Environment Inference for Invariant Learning (EIIL), builds on invariant learning methods such as Invariant Risk Minimization (IRM): where IRM takes hand-crafted environment labels as given, EIIL infers the partition of the training data that corresponds to the worst case for a reference model, and performing invariant learning on these inferred partitions yields good generalization.
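The environment-inference step can be sketched as follows (a minimal, illustrative PyTorch version, assuming a fixed reference model's logits and binary labels over the training set; names and hyperparameters are placeholders, not the authors' implementation). It uses the IRMv1 dummy-scale trick and performs gradient ascent on the invariance penalty with respect to a soft assignment of examples to two environments:

```python
import torch
import torch.nn.functional as F
from torch import autograd

def infer_environments(logits, labels, n_steps=10000, lr=0.001):
    """EIIL-style environment inference (sketch): find the soft binary partition
    of the training examples on which a fixed reference model looks *least*
    invariant, i.e. the partition that maximizes the IRMv1 penalty."""
    scale = torch.tensor(1.0, requires_grad=True)               # dummy classifier scale (IRMv1 trick)
    per_example_loss = F.binary_cross_entropy_with_logits(
        logits * scale, labels, reduction="none")

    env_logits = torch.randn(len(labels), requires_grad=True)   # soft assignment per example
    opt = torch.optim.Adam([env_logits], lr=lr)

    for _ in range(n_steps):
        q = torch.sigmoid(env_logits)                            # P(example belongs to environment 1)
        penalties = []
        for weight in (q, 1.0 - q):                              # the two soft environments
            env_risk = (per_example_loss * weight).mean()
            grad = autograd.grad(env_risk, [scale], create_graph=True)[0]
            penalties.append(grad.pow(2).sum())                  # IRMv1 penalty for this environment

        objective = -torch.stack(penalties).mean()               # ascend on the penalty
        opt.zero_grad()
        objective.backward(retain_graph=True)
        opt.step()

    return (torch.sigmoid(env_logits) > 0.5).long().detach()     # hard environment labels
```

Invariant learning (e.g. IRM) would then be run on these inferred environments in place of hand-crafted ones.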

Through experiments on the Color-MNIST dataset, the authors demonstrate the robustness of EIIL without requiring a priori knowledge of the environments, and on the UCI Adult dataset they show how EIIL directly optimizes the common fairness criterion of group sufficiency without knowledge of the sensitive groups.
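Group sufficiency, mentioned above, requires that the label rate conditioned on the model's score be the same for every group. One simple binned estimate of the gap (an illustrative choice, not the paper's evaluation protocol; it assumes probability scores in [0, 1], binary labels, and integer group labels) is:

```python
import torch

def group_sufficiency_gap(scores, labels, group, n_bins=10):
    """Binned estimate of how much E[Y | score, group] deviates from
    E[Y | score] for the worst bin/group combination."""
    bins = torch.clamp((scores * n_bins).long(), max=n_bins - 1)
    gap = torch.tensor(0.0)
    for b in range(n_bins):
        in_bin = bins == b
        if in_bin.sum() == 0:
            continue
        overall = labels[in_bin].float().mean()          # E[Y | score bin]
        for g in group.unique():
            mask = in_bin & (group == g)
            if mask.sum() > 0:                           # E[Y | score bin, group]
                gap = torch.maximum(gap, (labels[mask].float().mean() - overall).abs())
    return gap
```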

The authors also demonstrate the sensitivity of EIIL to the choice of reference representation: empirically, the algorithm discovers suitable worst-case partitions only when the reference representation encodes the wrong inductive bias by focusing on spurious features. This calls out the limited, setting-specific advantage of EIIL over standard Empirical Risk Minimization approaches.

They also propose some interesting future directions in which methods from domain generalization could be applied to create "fair" outcomes, such as the scenario where a distribution shift occurs due to the correction of some societal harm.

The paper puts forth an extremely exciting research direction that emerges naturally from the shared objective of generalizing to an unseen domain and satisfying a specific notion of fairness. The authors adeptly show how ideas from 'Fairness through Blindness' can help in learning domain invariance, and this motivates a deeper, more critical look at how two seemingly disparate sub-fields of machine learning can inform and even bolster one another's capabilities.


Original paper by Elliot Creager, Jörn-Henrik Jacobsen, Richard Zemel: https://arxiv.org/abs/2010.07249

