Mitigating Participation Imbalance Bias in Asynchronous Federated Learning
By: Xiangyu Chang, Manyi Yao, Srikanth V. Krishnamurthy, and more
Potential Business Impact:
Makes AI learn better from many different computers.
In Asynchronous Federated Learning (AFL), the central server updates the global model immediately with each arriving client's contribution. As a result, clients perform their local training on different model versions, causing information staleness (delay). In federated environments with non-IID local data distributions, this asynchronous pattern amplifies the adverse effects of client heterogeneity (different data distributions, local objectives, etc.), because faster clients contribute updates more frequently and thereby bias the global model. We term this phenomenon heterogeneity amplification. Our work provides a theoretical analysis that maps AFL design choices to the error sources they induce when heterogeneity amplification occurs. Guided by this analysis, we propose ACE (All-Client Engagement AFL), which mitigates participation imbalance through immediate, non-buffered updates that use the latest information available from all clients. We also introduce a delay-aware variant, ACED, which balances client diversity against update staleness. Experiments with multiple models and tasks across diverse heterogeneity and delay settings validate our analysis and demonstrate the robust performance of both approaches.
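The abstract only sketches the mechanism, so the following is a minimal, hypothetical server-side sketch of the idea as described: the server keeps the most recent update from every client and, on each arrival, immediately aggregates over all clients' latest contributions rather than over the arriving update alone or a buffer. The class and parameter names (AceServer, receive, staleness_cap) are illustrative assumptions, not the paper's API; the staleness_cap filter stands in for the delay-aware ACED variant.

```python
# Hypothetical sketch of an ACE-style server update, based only on the abstract.
# Not the authors' implementation; names and the exact update rule are assumptions.
import numpy as np

class AceServer:
    def __init__(self, init_model, num_clients, lr=0.1, staleness_cap=None):
        self.model = np.array(init_model, dtype=float)  # global model parameters
        self.latest = [None] * num_clients              # latest delta from each client
        self.last_seen = np.zeros(num_clients)          # arrival index of each stored delta
        self.arrivals = 0
        self.lr = lr
        self.staleness_cap = staleness_cap              # None -> ACE; set -> ACED-style filter

    def receive(self, client_id, delta):
        """Called whenever any client's update arrives; applied immediately, no buffering."""
        self.arrivals += 1
        self.latest[client_id] = np.array(delta, dtype=float)
        self.last_seen[client_id] = self.arrivals

        # Aggregate the most recent contribution from every client seen so far,
        # so frequently-updating (fast) clients do not dominate the global model.
        contribs = []
        for cid, d in enumerate(self.latest):
            if d is None:
                continue
            staleness = self.arrivals - self.last_seen[cid]
            # ACED-style variant: drop contributions staler than the cap.
            if self.staleness_cap is not None and staleness > self.staleness_cap:
                continue
            contribs.append(d)
        if contribs:
            self.model += self.lr * np.mean(contribs, axis=0)
        return self.model
```

Under this sketch, each arrival triggers one global step whose direction reflects every client's latest information, which is one plausible way to read "immediate, non-buffered updates that use the latest information available from all clients."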
Similar Papers
The Impact Analysis of Delays in Asynchronous Federated Learning with Data Heterogeneity for Edge Intelligence
Machine Learning (CS)
Lets computers learn together even with slow connections.
Stragglers Can Contribute More: Uncertainty-Aware Distillation for Asynchronous Federated Learning
Machine Learning (CS)
Helps AI learn faster from many computers.
FedCure: Mitigating Participation Bias in Semi-Asynchronous Federated Learning with Non-IID Data
Machine Learning (CS)
Makes AI learn better from messy, uneven data.