
Exchanging Lessons Between Algorithmic Fairness and Domain Generalization (Research Summary)

November 30, 2020 by MAIEI

Summary contributed by our Artist-in-Residence Falaah Arif Khan. She’s also a Research Fellow in the CVIT Lab at the International Institute of Information Technology.

Link to original paper + authors at the bottom.


Overview: This paper unifies two seemingly disparate research directions in Machine Learning (ML), namely Domain Generalization and Fair Machine Learning, under the common goal of “learning algorithms robust to changes across domains or population groups”. It draws links between several popular methods in the Domain Generalization and Fair-ML literature and forges an exciting new research area at the intersection of the two.


Both algorithmic fairness and domain generalization share the common objective of reducing sensitivity to the training distribution. In algorithmic fairness, we wish to make classifications that are ‘fair’ according to our context-specific notion of fairness, so that we do not disadvantage individuals because of their membership in a certain group (defined by sensitive features such as race or gender). In Domain Generalization, we wish to learn features that are ‘domain-invariant’, so that the classifier’s predictions are based on object information rather than stylistic information, such as color, which may vary across data domains. This exciting line of work attempts to take the best of both worlds, sharing insights and methods across Fair-ML and Domain Generalization to design algorithms that, within their specific context, are both robust and “fair”. Group membership can be treated as a domain-specific attribute, so the popular conception of ‘Fairness through Blindness’, which removes all sensitive attributes (such as gender or race) from consideration, has a natural connection to the ‘domain-invariant’ features learned in Domain Generalization tasks.
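To make the ‘Fairness through Blindness’ idea above concrete, here is a toy sketch (my own illustration, not from the paper) that simply drops sensitive columns from a feature table before training. The column names and data are invented, and, as the Fair-ML literature stresses, blindness alone does not guarantee fairness because proxy features can still encode group information.

```python
# Toy sketch of 'Fairness through Blindness': drop sensitive attributes
# (hypothetical 'gender' and 'race' columns) before fitting a model.
# Illustrative only; proxy features can still leak group information.
import pandas as pd

df = pd.DataFrame({
    "age":    [23, 45, 31, 52],
    "income": [40, 85, 62, 70],
    "gender": ["F", "M", "F", "M"],   # sensitive attribute
    "race":   ["A", "B", "A", "B"],   # sensitive attribute
    "label":  [0, 1, 1, 1],
})

SENSITIVE = ["gender", "race"]
X_blind = df.drop(columns=SENSITIVE + ["label"])  # the model is 'blind' to these
y = df["label"]
print(list(X_blind.columns))  # -> ['age', 'income']
```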

The authors provide a succinct review of both the Domain Generalization and Fair-ML literature, covering Distributionally Robust Optimization (DRO), invariant learning methods such as Invariant Risk Minimization (IRM) and Risk Extrapolation, and fairness notions including demographic parity, equal opportunity, calibration, group sufficiency and multi-calibration. They also map common objectives across the two areas so that lessons from one can inform the other. Intuitively, group membership can be thought of as domain information: in the Fair-ML literature, group membership is based on some protected attribute such as gender or race, while in Domain Generalization the target domain is a mixture of multiple domains, all of which may or may not be available during training.
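For concreteness, the sketch below (my own illustration, not code from the paper) computes two of the group fairness notions listed above from binary predictions, labels, and a binary group attribute; the data are random placeholders.

```python
# Minimal sketches of two group fairness criteria mentioned above,
# assuming binary predictions/labels and a binary group attribute.
import numpy as np

def demographic_parity_gap(y_pred, group):
    """|P(Yhat=1 | A=0) - P(Yhat=1 | A=1)|: difference in positive rates."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_gap(y_pred, y_true, group):
    """Difference in true-positive rates between the two groups."""
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return abs(tpr(0) - tpr(1))

# Random placeholder data, for illustration only.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1000)
y_pred = rng.integers(0, 2, 1000)
group = rng.integers(0, 2, 1000)
print(demographic_parity_gap(y_pred, group),
      equal_opportunity_gap(y_pred, y_true, group))
```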

In Domain Generalization, we wish the algorithm to learn properties that will generalize well to the test distribution. In algorithmic fairness, on the other hand, our learning objective is dictated by the worldview we employ and the context-specific fairness notion we wish to satisfy.

The authors draw from fairness approaches that optimize worst-case performance without access to demographic information and formulate an algorithm that learns domain invariance without access to environment information. They consider the realistic scenario in which the train-test partitioning of domains is not provided, since in the real world domains overlap and a clean split into the domains present in the test environment is practically infeasible. The proposed method, Environment Inference for Invariant Learning (EIIL), is a variant of invariant learning: whereas methods such as IRM take hand-crafted environment labels as input, EIIL infers the partition into environments that corresponds to worst-case (maximally non-invariant) behavior under a reference model. Performing invariant learning on these inferred partitions then yields good generalization.
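To make the two-stage idea concrete, here is a rough, self-contained PyTorch sketch of how EIIL-style environment inference could look. It is my own illustration rather than the authors’ implementation: the synthetic data, model sizes and hyperparameters (e.g., the penalty weight of 10.0) are invented, and the invariant-learning step uses an IRMv1-style penalty on the inferred split.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy data: x0 is the "causal" feature, x1 is spurious (correlated with y
# only most of the time), loosely in the spirit of Color-MNIST.
n = 2000
y = torch.randint(0, 2, (n,)).float()
x0 = y + 0.5 * torch.randn(n)
flip = (torch.rand(n) < 0.15).float()
x1 = (y * (1 - flip) + (1 - y) * flip) + 0.3 * torch.randn(n)
X = torch.stack([x0, x1], dim=1)

def irm_penalty(logits, targets, weights=None):
    """IRMv1-style penalty: squared gradient of the (optionally weighted)
    risk with respect to a dummy scale w held at 1.0."""
    w = torch.tensor(1.0, requires_grad=True)
    losses = F.binary_cross_entropy_with_logits(logits * w, targets,
                                                reduction="none")
    if weights is None:
        risk = losses.mean()
    else:
        risk = (weights * losses).sum() / weights.sum().clamp(min=1e-8)
    grad = torch.autograd.grad(risk, [w], create_graph=True)[0]
    return grad.pow(2)

# Stage 1: reference model trained by plain ERM.
ref = torch.nn.Linear(2, 1)
opt = torch.optim.Adam(ref.parameters(), lr=0.05)
for _ in range(200):
    opt.zero_grad()
    F.binary_cross_entropy_with_logits(ref(X).squeeze(-1), y).backward()
    opt.step()
ref_logits = ref(X).squeeze(-1).detach()

# Stage 2: infer a soft two-way split q that MAXIMIZES the penalty under
# the fixed reference model (the environment-inference step).
q_logits = torch.randn(n, requires_grad=True)
opt_q = torch.optim.Adam([q_logits], lr=0.05)
for _ in range(500):
    q = torch.sigmoid(q_logits)
    obj = irm_penalty(ref_logits, y, q) + irm_penalty(ref_logits, y, 1 - q)
    opt_q.zero_grad()
    (-obj).backward()          # gradient ascent on the invariance penalty
    opt_q.step()
env = (torch.sigmoid(q_logits) > 0.5).detach()

# Stage 3: invariant learning (IRMv1-style) on the inferred environments.
model = torch.nn.Linear(2, 1)
opt_m = torch.optim.Adam(model.parameters(), lr=0.05)
for _ in range(300):
    total = 0.0
    for mask in (env, ~env):
        if mask.sum() == 0:
            continue
        logits_e = model(X[mask]).squeeze(-1)
        total = total + F.binary_cross_entropy_with_logits(logits_e, y[mask])
        total = total + 10.0 * irm_penalty(logits_e, y[mask])
    opt_m.zero_grad()
    total.backward()
    opt_m.step()

# Weights on [x0, x1]; invariant training should (ideally) downweight x1.
print(model.weight.detach())
```

The key point of the sketch is that the environment-inference stage performs gradient ascent on the invariance penalty under a fixed reference model, so the inferred split isolates the examples on which the reference model leans on spurious features.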

Through experiments on the Color-MNIST dataset, the authors demonstrate the robustness of EIIL without requiring a priori knowledge of the environments, and they further show that, on the UCI Adult dataset, EIIL directly optimizes the common fairness criterion of group sufficiency without knowledge of the sensitive groups.
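Group sufficiency roughly requires that the expected label, conditioned on the model’s score, be the same across groups. The sketch below (my own rough illustration with placeholder data, not the paper’s evaluation code) estimates a binned group-sufficiency gap.

```python
# Rough binned estimate of a group-sufficiency gap: how much E[Y | score, A]
# differs between two groups, averaged over score bins. Illustrative only.
import numpy as np

def group_sufficiency_gap(scores, y_true, group, n_bins=10):
    bins = np.clip((scores * n_bins).astype(int), 0, n_bins - 1)
    gaps = []
    for b in range(n_bins):
        rates = []
        for g in (0, 1):
            mask = (bins == b) & (group == g)
            if mask.any():
                rates.append(y_true[mask].mean())
        if len(rates) == 2:
            gaps.append(abs(rates[0] - rates[1]))
    return float(np.mean(gaps)) if gaps else 0.0

# Placeholder data for illustration.
rng = np.random.default_rng(0)
scores = rng.random(1000)                      # model scores in [0, 1]
y_true = (rng.random(1000) < scores).astype(int)
group = rng.integers(0, 2, 1000)
print(group_sufficiency_gap(scores, y_true, group))
```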

The authors also demonstrate the sensitivity of EIIL to the choice of reference representation: empirically, the algorithm discovers suitable worst-case partitions only when the reference representation encodes the incorrect inductive bias by focusing on spurious features. This calls out the limited, setting-specific effectiveness of EIIL over standard Empirical Risk Minimization (ERM) approaches.

They also propose some interesting future directions in which methods from domain generalization could be applied to create “fair” outcomes, such as the scenario where a distribution shift occurs due to the correction of some societal harm.

The paper puts forth an extremely exciting research direction that emerges naturally from the shared objective of generalizing to an unseen domain and fulfilling a specific notion of fairness. The authors adeptly show how ideas from ‘Fairness through Blindness’ can help in learning domain invariance, and this motivates a deeper, more critical look at how two seemingly disparate sub-fields of machine learning can inform and even bolster each other’s capabilities.


Original paper by Elliot Creager, Jörn-Henrik Jacobsen, Richard Zemel: https://arxiv.org/abs/2010.07249

Category: Research Summaries
