Toward Fair Federated Learning under Demographic Disparities and Data Imbalance
By: Qiming Wu, Siqi Li, Doudou Zhou, and more
Potential Business Impact:
Makes AI fairer for everyone in medicine.
Ensuring fairness is critical when applying artificial intelligence to high-stakes domains such as healthcare, where predictive models trained on imbalanced and demographically skewed data risk exacerbating existing disparities. Federated learning (FL) enables privacy-preserving collaboration across institutions, but remains vulnerable to both algorithmic bias and subgroup imbalance, particularly when multiple sensitive attributes intersect. We propose FedIDA (Federated Learning for Imbalance and Disparity Awareness), a framework-agnostic method that combines fairness-aware regularization with group-conditional oversampling. FedIDA supports multiple sensitive attributes and heterogeneous data distributions without altering the convergence behavior of the underlying FL algorithm. We provide theoretical analysis establishing fairness improvement bounds using Lipschitz continuity and concentration inequalities, and show that FedIDA reduces the variance of fairness metrics across test sets. Empirical results on both benchmark and real-world clinical datasets confirm that FedIDA consistently improves fairness while maintaining competitive predictive performance, demonstrating its effectiveness for equitable and privacy-preserving modeling in healthcare. The source code is available on GitHub.
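To make the two ingredients concrete, here is a minimal, illustrative sketch of group-conditional oversampling and a fairness-penalty term. This is not the released FedIDA implementation; the function names, the choice to balance every (label, group) cell to the largest cell, and the demographic-parity-style gap penalty are all assumptions made for illustration.

```python
import numpy as np

def group_conditional_oversample(X, y, groups, rng):
    """Resample so every (label, sensitive-group) cell matches the largest cell.

    `groups` can encode an intersection of several sensitive attributes
    (e.g. an integer code for (sex, race)), which is how multiple
    attributes could be handled under this sketch.
    """
    keys = list(zip(y.tolist(), groups.tolist()))
    cells = set(keys)
    max_n = max(sum(1 for k in keys if k == c) for c in cells)
    idx_out = []
    for c in cells:
        idx = [i for i, k in enumerate(keys) if k == c]
        idx_out.extend(idx)
        if len(idx) < max_n:  # draw extra samples with replacement
            idx_out.extend(rng.choice(idx, size=max_n - len(idx), replace=True))
    idx_out = np.asarray(idx_out, dtype=int)
    return X[idx_out], y[idx_out], groups[idx_out]

def fairness_penalty(scores, groups):
    """Squared gap between the best- and worst-off group's mean score.

    A client could add `lambda * fairness_penalty(...)` to its local loss
    as a simple fairness-aware regularizer; this surrogate is an
    assumption, not the paper's exact regularizer.
    """
    means = [scores[groups == g].mean() for g in np.unique(groups)]
    return (max(means) - min(means)) ** 2
```

Because the resampling and the penalty act only on each client's local data and local loss, a sketch like this leaves the federated aggregation step (e.g. FedAvg) untouched, which is consistent with the abstract's claim that the underlying FL algorithm's convergence behavior is unchanged.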
Similar Papers
AFed: Algorithmic Fair Federated Learning
Machine Learning (CS)
Makes AI fair for everyone, even with private data.
FedDiverse: Tackling Data Heterogeneity in Federated Learning with Diversity-Driven Client Selection
Machine Learning (CS)
Helps AI learn better from different data.
pFedFair: Towards Optimal Group Fairness-Accuracy Trade-off in Heterogeneous Federated Learning
Machine Learning (CS)
Makes AI fair for everyone, not just some.