🔬 Research Summary by Changhun Jo, Jy-yong Sohn, Kangwook Lee.
Changhun Jo is a PhD candidate at the University of Wisconsin-Madison, working on algorithmic fairness, social recommender systems, and machine learning.
Jy-yong Sohn is a post-doctoral researcher at the University of Wisconsin-Madison, interested in the intersection of machine learning, information theory, and distributed algorithms.
Kangwook Lee is an Assistant Professor at the University of Wisconsin-Madison, and his research interests lie in machine learning, deep learning, information theory, and coding.
[Original paper by Changhun Jo, Jy-yong Sohn, Kangwook Lee]
Overview: Fair classification has been widely studied since current machine learning (ML) systems tend to make biased decisions across groups defined by race, gender, age, and the like. We focus on fair classification under corrupted data, motivated by the fact that the web-scale data used for training models is potentially poisoned by random or adversarial noise. This paper studies the minimum amount of data corruption required for a successful attack against a fairness-aware learner.
Introduction
It has been observed that current ML systems make unfair decisions for different groups of people depending on their race, gender, age, etc. For example, according to experimental results reported by AlgorithmWatch (a non-profit research organization) in 2020, a thermometer held by a dark-skinned hand was labeled as a "gun" by Google Cloud Vision, while the same thermometer held by a light-skinned hand was labeled as a "monocular". This has motivated various attempts to develop fair classifiers in recent years. Meanwhile, many ML pipelines use crowdsourced training data, which potentially contains random or adversarial noise. Thus, it is important to study the effect of data poisoning on ML frameworks.
In this work, we ask the following question: “What is the minimum amount of data poisoning needed to make a fairness-aware learner output a predefined target classifier?” Focusing on the setting where the attacker can manipulate the labels and sensitive attributes (race, gender, etc.) of data samples, we provide an information-theoretic analysis of the minimum amount of data poisoning required to make the classifier learned on the poisoned data distribution coincide with the target classifier. Inspired by this theoretical result, we also propose a computationally efficient data poisoning attack algorithm that can compromise the performance of a fairness-aware learner.
Key Insights
Problem setup
Fairness-aware learner
We consider a learner that finds a classifier achieving the minimum risk among fair classifiers. We use a popular fairness notion called equal opportunity, which aims to equalize the true positive rates across groups.
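For concreteness, here are the equal opportunity constraint and the learner's objective in our own shorthand (notation ours; the paper's formal definitions may differ slightly), for a classifier f, binary label Y, and binary sensitive attribute Z:

```latex
% Equal opportunity: equal true positive rates across the two groups
\Pr\big(f(X)=1 \mid Y=1,\, Z=0\big) \;=\; \Pr\big(f(X)=1 \mid Y=1,\, Z=1\big)

% The fairness-aware learner returns a minimum-risk classifier among fair ones
\hat{f} \;\in\; \arg\min_{f\ \text{satisfying equal opportunity}} R(f)
```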
Attacker
The attacker’s goal is to make the learner output the target classifier using the minimum amount of data perturbation. The attacker can change (or flip, since we consider a binary setup) labels and sensitive attributes. This attack is dubbed a “flipping attack” in our paper.
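In the same spirit, the attacker's problem can be phrased as a minimum-budget search (again our notation, not the paper's): find the smallest flip budget under which some reachable poisoned distribution makes the learner return the target classifier.

```latex
% A = fairness-aware learner; B_eps(D) = distributions reachable from the clean
% distribution D by flipping at most an eps-fraction of (Y, Z) values
\epsilon^{*}(f_{\mathrm{target}})
= \min\Big\{\, \epsilon \ge 0 \;:\;
\exists\, \widetilde{D} \in \mathcal{B}_{\epsilon}(D)
\ \text{such that}\ \mathcal{A}\big(\widetilde{D}\big) = f_{\mathrm{target}} \Big\}
```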
Information-Theoretic Results on Minimum Perturbation
We derive lower and upper bounds on the minimum amount of data perturbation required for a successful flipping attack. The lower bound is obtained by finding the minimum amount of data perturbation needed to make the target classifier look perfectly fair on the poisoned distribution. The upper bound is obtained by constructing an explicit, successful data poisoning scheme via a two-stage attack algorithm: in the first stage, the attacker flips labels to reach an intermediate distribution on which the target classifier achieves the minimum risk; in the second stage, the attacker flips sensitive attributes to reach a distribution on which the target classifier looks perfectly fair. Interestingly, these bounds match when the target classifier is the risk minimizer.
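Schematically, the two arguments sandwich the minimum flip budget as follows (our notation; see the paper for the precise statements):

```latex
% Lower bound: budget to make the target look perfectly fair on the poisoned data
% Upper bound: total budget of the explicit two-stage attack
\underbrace{\epsilon_{\mathrm{fair}}(f_{\mathrm{target}})}_{\text{make } f_{\mathrm{target}} \text{ look fair}}
\;\le\; \epsilon^{*}(f_{\mathrm{target}}) \;\le\;
\underbrace{\epsilon_{Y}(f_{\mathrm{target}})}_{\text{stage 1: label flips}}
+ \underbrace{\epsilon_{Z}(f_{\mathrm{target}})}_{\text{stage 2: attribute flips}}
```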
Sensitive Attribute Flipping Attack Algorithm
Inspired by our theoretical findings, we propose a computationally efficient data poisoning attack algorithm. Specifically, we set the empirical risk minimizer, which is generally unfair, as the target classifier and apply the empirical counterpart of the second stage of our two-stage attack algorithm. Note that this attack flips only sensitive attributes, without perturbing labels. We prove that it makes the empirical risk minimizer look almost fair on the poisoned distribution. Our attack therefore increases the chance that the fair learner selects the empirical risk minimizer, thereby degrading the fairness of the learning algorithm.
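Below is a minimal Python sketch of the idea behind this stage: greedily flip the sensitive attribute of correctly classified positive samples from the currently advantaged group until the target classifier's empirical true-positive-rate gap (the equal opportunity gap) vanishes. The function name and the greedy flipping rule are our illustration, not the paper's exact algorithm; it assumes binary 0/1 NumPy arrays and that both groups contain positive samples.

```python
import numpy as np

def z_flip_attack(y, z, y_hat, budget):
    """Illustrative sketch (not the paper's exact algorithm): flip sensitive
    attributes so that a fixed classifier's predictions y_hat look fair
    (equal opportunity) on the poisoned data.

    y, z, y_hat : binary 0/1 NumPy arrays of labels, sensitive attributes,
                  and the target classifier's predictions on the training set.
    budget      : maximum number of attribute flips.
    Returns a poisoned copy of z.
    """
    z = z.copy()
    pos = (y == 1)                           # equal opportunity only concerns Y = 1
    for _ in range(budget):
        tpr = [y_hat[pos & (z == g)].mean() for g in (0, 1)]
        gap = tpr[0] - tpr[1]
        if abs(gap) < 1e-6:                  # y_hat already looks fair; stop early
            break
        adv = 0 if gap > 0 else 1            # group currently enjoying the higher TPR
        # Relabeling a correctly classified positive from the advantaged group
        # as the other group lowers the former's TPR and raises the latter's.
        candidates = np.flatnonzero(pos & (z == adv) & (y_hat == 1))
        if candidates.size == 0:
            break
        z[candidates[0]] = 1 - adv           # flip one sample's sensitive attribute
    return z
```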
Experimental Results
For convenience of presentation, we use Y and Z to denote the label and the sensitive attribute, respectively. We compare our sensitive attribute flipping algorithm with four baseline data poisoning attack algorithms (a short NumPy paraphrase of the random-flip baselines follows the list):
(1) the random Y-flip attack chooses random samples from the training set and flips their Y values
(2) the random Z-flip attack chooses random samples and flips their Z values
(3) the random Y&Z-flip attack chooses random samples and flips both their Y and Z values
(4) the adversarial sampling (AS) attack chooses adversarial samples from the feasible attack set using the online gradient descent algorithm proposed in [1] and adds them to the training set.
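For reference, the three random-flip baselines reduce to a few lines of NumPy; the function below is our paraphrase of their behavior, not code from any of the cited papers (the AS attack of [1] is more involved and omitted here).

```python
import numpy as np

def random_flip(y, z, budget, flip_y=True, flip_z=False, seed=0):
    """Random-flip baselines: pick `budget` random training samples and
    flip their label (Y), their sensitive attribute (Z), or both."""
    rng = np.random.default_rng(seed)
    y, z = y.copy(), z.copy()
    idx = rng.choice(len(y), size=budget, replace=False)
    if flip_y:
        y[idx] = 1 - y[idx]   # random Y-flip
    if flip_z:
        z[idx] = 1 - z[idx]   # random Z-flip
    return y, z
```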
We evaluate these attacks against three fair learning algorithms:
(1) fair classification using fairness constraints (FC) [2]
(2) fair training against adversarial perturbations (Err-Tol) [3]
(3) fair and robust training (FR-Train) [4] given a clean validation set.
[Table 1. Comparison with other baseline attack algorithms]
Shown in Table 1 is the performance of the attack algorithms against the fair learning algorithms on a synthetic dataset. When the learner runs the fair learning algorithms on the uncorrupted dataset, unfairness significantly decreases at the cost of degraded accuracy, exhibiting the well-known tradeoff between accuracy and fairness. One can observe that by manipulating only 3.2% of the data, our attack makes the output significantly unfair, outperforming or matching the other attack baselines. Interestingly, our attack successfully degrades the fairness of the robust fair training algorithms, Err-Tol and FR-Train.
Between the lines
The theoretical results in our paper provide insights into how robust fairness-aware learners are against data poisoning attacks and what the optimal way of fooling them is. According to our empirical results, manipulating only about 3% of the data is enough to make a fairness-aware learner output a significantly unfair model. This implies that system designers should take care to acquire clean data if they want to guarantee that the learner outputs a fair classifier.
Our paper opens up interesting research questions: (1) tightening the lower/upper bounds for a general target classifier, (2) extending the results to the case where labels and sensitive attributes are non-binary, and (3) extending our attack algorithm to the federated learning setup, where the effect of data poisoning on one adversarial device is diluted by aggregating models from multiple honest devices.
References
[1] Hongyan Chang, Ta Duy Nguyen, Sasi Kumar Murakonda, Ehsan Kazemi, and Reza Shokri. On Adversarial Bias and the Robustness of Fair Machine Learning. arXiv:2006.08669, 2020.
[2] Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rodriguez, and Krishna P. Gummadi. Fairness Constraints: Mechanisms for Fair Classification. In International Conference on Artificial Intelligence and Statistics (AISTATS), 2017.
[3] L. Elisa Celis, Anay Mehrotra, and Nisheeth K. Vishnoi. Fair Classification with Adversarial Perturbations. In Advances in Neural Information Processing Systems (NeurIPS), 2021.
[4] Yuji Roh, Kangwook Lee, Steven Whang, and Changho Suh. FR-Train: A Mutual Information-Based Approach to Fair and Robust Training. In International Conference on Machine Learning (ICML), 2020.