🔬 Research Summary by Hongyan Chang, a sixth-year Ph.D. student at the National University of Singapore whose research focuses on algorithmic fairness and privacy, particularly their intersection, and on advancing trustworthy federated learning.
[Original paper by Hongyan Chang and Reza Shokri]
Overview: This paper investigates the propagation of bias in federated learning, demonstrating that the participation of a few biased clients can significantly compromise group fairness by encoding their bias within the model parameters. The study reveals that the resulting bias in federated learning is notably higher than that observed in centralized training, indicating that the bias is intrinsic to the algorithm itself. The findings underline the critical need to evaluate group fairness in federated learning and to develop algorithms that are robust against bias propagation.
Introduction
Fairness in machine learning models is a hotly debated topic, within and beyond academic papers, in a world increasingly reliant on data-driven decisions. Picture this: you have local data to train a model, but limited data from minority groups could yield a biased model that makes unfair predictions. Federated Learning (FL), a technique that allows clients to collectively learn a global model while keeping all training data local, seems promising for improving fairness because of its access to diverse datasets. But does exposure to broader data under FL actually mitigate inherent biases?
Our investigation delves into this question using the naturally partitioned US Census dataset. Surprisingly, we unearth a paradox: instead of reducing bias, FL can exacerbate fairness issues, particularly for clients with unbiased or less biased datasets (i.e., more data from the minority group). We term this phenomenon bias propagation: during collaborative learning, bias from more biased datasets spreads to less biased ones, worsening their fairness. Furthermore, we demonstrate that this effect works by embedding the bias within a few specific model parameters; because these parameters are shared during model aggregation, the bias propagates to every client.
Consequently, our findings underscore a critical caveat for machine learning fairness: expanded access to diverse datasets through FL does not inherently reduce bias. There is therefore an urgent need to design FL systems that are fair and robust against bias propagation.
Key Insights
Impact of Federated Learning on Fairness
Federated learning (FL) provides a promising solution by enabling clients to learn a global model collaboratively without sharing their data. In each round of FL, clients share their local model updates computed on their private datasets with a global server that aggregates them to update the global model. Despite the widespread adoption of FL in various applications such as healthcare, recruitment, and loan evaluation, it is not yet fully understood how FL algorithms could magnify bias in training datasets.
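To make the protocol concrete, here is a minimal sketch of one FedAvg-style round as described above. The `client.local_train` and `client.num_examples` interface is a hypothetical simplification for illustration, not the paper's implementation.

```python
import numpy as np

def federated_round(global_params, clients):
    """One FedAvg-style round: each client trains locally starting from the
    global parameters, and the server averages the resulting updates,
    weighted by local dataset size."""
    updates, weights = [], []
    for client in clients:
        # Hypothetical client API: a few local epochs of SGD on private data.
        local_params = client.local_train(start=np.copy(global_params))
        updates.append(local_params - global_params)
        weights.append(client.num_examples)
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()
    # The aggregated update is applied to the global model and broadcast back.
    aggregated_update = sum(w * u for w, u in zip(weights, updates))
    return global_params + aggregated_update
```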
We note that in practice, clients often have heterogeneous data distributions. Evaluating the model’s bias concerning the global distribution does not accurately reflect the fairness of the FL model for clients’ local data distributions, which are relevant to end-users. This is the critical problem that we address in this paper. Specifically, we investigate the following questions:
- How does participating in FL affect the bias and fairness of the resulting models compared to models that are trained in a standalone setting?
- Does FL provide clients with the potential fairness benefits of centralized training on the union of their data?
- Can clients with biased datasets negatively impact the experienced fairness of other clients on their local distributions?
- How and why does the bias of a small number of clients affect the entire network?
Methodology
Defining Bias
In the complex landscape of fairness, the concept is multifaceted. We concentrate on group fairness, which demands similar model performance across different groups distinguished by sensitive traits (e.g., gender). Evaluating discrimination quantitatively based on group fairness metrics is becoming standard. Therefore, we emphasize two prevalent notions: equalized odds and demographic parity. We quantify the discrepancy in these fairness metrics as the “fairness gap.”
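As a concrete reference point, the two gaps can be computed from binary predictions roughly as follows; this is a minimal sketch with our own function names and a binary sensitive attribute, not the paper's evaluation code.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """|P(yhat=1 | group=0) - P(yhat=1 | group=1)| for a binary sensitive attribute."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equalized_odds_gap(y_true, y_pred, group):
    """Largest gap, over true labels y in {0, 1}, in P(yhat=1 | Y=y) between groups
    (i.e., the larger of the true-positive-rate and false-positive-rate gaps)."""
    gaps = []
    for y in (0, 1):
        rate_0 = y_pred[(group == 0) & (y_true == y)].mean()
        rate_1 = y_pred[(group == 1) & (y_true == y)].mean()
        gaps.append(abs(rate_0 - rate_1))
    return max(gaps)
```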
The Collaboration’s Consequences
FL seeks to surpass the efficacy of standalone training and approach the quality of centralized training. We therefore assess FL's fairness implications using centralized and standalone training as benchmarks. In standalone training, each client independently optimizes a model on its own data; the resulting fairness gap reflects that client's inherent level of bias. In contrast, centralized training pools all datasets to train a single model. We then evaluate the "benefit" of FL and of centralized training as the improvement in fairness and accuracy each yields over standalone training.
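Concretely, one way to operationalize this comparison (our own notation, not necessarily the paper's exact metric) is to compute, per client, how much collaboration improves accuracy and shrinks the fairness gap relative to that client's standalone model:

```python
def collaboration_benefit(standalone, collaborative):
    """Per-client benefit of collaboration (FL or centralized training) over
    standalone training. Each argument is a dict with 'accuracy' and
    'fairness_gap' measured on the client's local distribution."""
    return {
        "accuracy_benefit": collaborative["accuracy"] - standalone["accuracy"],
        # A smaller fairness gap is better, so the benefit is the reduction in the gap.
        "fairness_benefit": standalone["fairness_gap"] - collaborative["fairness_gap"],
    }
```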
Findings
With this setup, here are our findings.
- Federated Learning’s Paradoxical Effect on Fairness: Collaboration through FL does not consistently improve fairness; it sometimes intensifies fairness issues. While centralized training tends to enhance both accuracy and fairness, FL does not always deliver these benefits, indicating that FL algorithms have an intrinsic propensity to introduce additional bias that centralized training does not.
- Inequitable Fairness Impacts Among Clients: FL’s influence on fairness is uneven. It can improve fairness for clients that start with higher bias but worsen it for those that start with lower bias, meaning the fairness impact of FL is distributed disproportionately across clients.
- The Disagreement between Local Updates and Aggregation: Examining local updates and their aggregation reveals a paradox. Local updates from less biased clients tend to improve fairness, but this improvement is undone during the aggregation phase. Conversely, updates from more biased clients aggravate fairness issues, and aggregation only partially mitigates this. The pattern shows that aggregation itself drives the uneven fairness impacts, inadvertently favoring the more biased clients (a per-round diagnostic illustrating this is sketched after this list).
- Underlying Causes of Fairness Disparities: Surprisingly, substantial fairness gaps in the FL model aren’t primarily due to obvious attribute distribution differences among groups. Instead, they stem from models treating protected groups differently, leading to pronounced biases.
- Bias Manifestation in Model Parameters: Biased clients tend to increase the model’s reliance on sensitive attributes, a trend that endures throughout training. This heightened sensitivity is not random but is encoded within specific model parameters: parameters critical for extracting sensitive-attribute information become more prominent in biased clients’ updates, skewing the model’s predictions. The bias does not stay isolated; it spreads through the network during aggregation, degrading fairness even for the least biased clients (a simple reliance probe is sketched after this list).
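To illustrate the local-update-versus-aggregation finding above, one could run a per-round diagnostic along the following lines; the unweighted average and the `local_train`/`local_eval_data`/`id` client interface are simplifying assumptions for this sketch, not the paper's exact analysis.

```python
import numpy as np

def aggregation_fairness_effect(global_params, clients, fairness_gap):
    """For one FL round, compare each client's fairness gap (measured on its own
    local data) under its locally updated model vs. the aggregated model."""
    local_models = [c.local_train(start=np.copy(global_params)) for c in clients]
    aggregated = np.mean(local_models, axis=0)  # unweighted FedAvg-style average for simplicity
    effect = {}
    for client, local_model in zip(clients, local_models):
        gap_local = fairness_gap(local_model, client.local_eval_data)
        gap_aggregated = fairness_gap(aggregated, client.local_eval_data)
        # Positive values mean aggregation worsened fairness for this client.
        effect[client.id] = gap_aggregated - gap_local
    return effect
```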
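Likewise, a simple probe of the model's reliance on the sensitive attribute, assuming (as in the setting studied here) that the attribute is an explicit binary input column, is to flip that column and count how often the prediction changes. This probe is our own illustration of model-level reliance, not the paper's parameter-level analysis.

```python
import numpy as np

def sensitive_attribute_reliance(predict, X, sensitive_col):
    """Fraction of examples whose prediction flips when only the binary
    sensitive attribute is flipped; higher values indicate stronger reliance."""
    X_flipped = X.copy()
    X_flipped[:, sensitive_col] = 1 - X_flipped[:, sensitive_col]
    return float(np.mean(predict(X) != predict(X_flipped)))
```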
Between the lines
This paper provides an insightful analysis of the biases that Federated Learning (FL), a prevalent machine learning approach, can inadvertently amplify, particularly concerning how these biases unevenly impact participating entities. The study reveals an alarming tendency of FL to exacerbate existing biases during local updates, encoding them into the global model by altering specific parameters and thereby affecting all clients via parameter aggregation.
However, the research primarily concentrates on instances where the model explicitly utilizes sensitive characteristics such as race or gender. This observation uncovers critical avenues for future inquiry: How might we identify and mitigate biases in model parameters when sensitive attributes are not overtly used? How can we engineer FL methodologies that are robust to these subtle forms of bias propagation? These queries are crucial as FL’s application expands, particularly in sectors with significant personal consequences. Addressing these gaps is essential for promoting fairness within collaborative learning systems.