pFedFair: Towards Optimal Group Fairness-Accuracy Trade-off in Heterogeneous Federated Learning
By: Haoyu Lei, Shizhan Gong, Qi Dou, and more
Potential Business Impact:
Keeps AI decisions fair across groups such as gender or race without giving up accuracy.
Federated learning (FL) algorithms commonly aim to maximize clients' accuracy by training a model on their collective data. However, in several FL applications, the model's decisions should satisfy a group fairness constraint, remaining independent of sensitive attributes such as gender or race. While such group fairness constraints can be incorporated into the objective function of the FL optimization problem, in this work we show that this approach leads to suboptimal classification accuracy in an FL setting with heterogeneous client distributions. To achieve an optimal accuracy-group fairness trade-off, we propose the Personalized Federated Learning for Client-Level Group Fairness (pFedFair) framework, in which clients locally impose their own fairness constraints on the distributed training process. Leveraging image embedding models, we extend pFedFair to computer vision settings, where we numerically show that pFedFair achieves an optimal group fairness-accuracy trade-off in heterogeneous FL settings. We present the results of several numerical experiments on benchmark and synthetic datasets, which highlight the suboptimality of non-personalized FL algorithms and the improvements achieved by pFedFair.
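The abstract's key mechanism, each client adding its own group-fairness penalty to its local objective while sharing a globally averaged model, can be illustrated with a small toy. The sketch below is not the authors' pFedFair implementation: it assumes a demographic-parity penalty on a logistic-regression model, per-client fairness weights (`lam`), and FedAvg-style aggregation, and all names (`dp_gap`, `local_update`) are hypothetical.

```python
# Minimal sketch of client-level fairness in personalized FL.
# NOT the authors' pFedFair code; demographic parity, logistic regression,
# and FedAvg aggregation are assumed choices for illustration only.
import numpy as np

def dp_gap(scores, sensitive):
    """Demographic-parity gap: difference in mean predicted score
    between the two groups of an assumed binary sensitive attribute."""
    return abs(scores[sensitive == 1].mean() - scores[sensitive == 0].mean())

def local_update(w, X, y, s, lam, lr=0.1, steps=50):
    """One client's local training: logistic loss + lam * fairness penalty.
    The weight `lam` is chosen per client (client-level fairness)."""
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))           # predicted probabilities
        grad = X.T @ (p - y) / len(y)              # logistic-loss gradient
        # gradient of the demographic-parity penalty |mean_1(p) - mean_0(p)|
        g1 = (X[s == 1] * (p[s == 1] * (1 - p[s == 1]))[:, None]).mean(axis=0)
        g0 = (X[s == 0] * (p[s == 0] * (1 - p[s == 0]))[:, None]).mean(axis=0)
        sign = np.sign(p[s == 1].mean() - p[s == 0].mean())
        w = w - lr * (grad + lam * sign * (g1 - g0))
    return w

# Toy federated round: two clients with heterogeneous data and their own lam.
rng = np.random.default_rng(0)
clients = []
for shift in (0.0, 1.5):                           # heterogeneous distributions
    X = rng.normal(shift, 1.0, size=(200, 3))
    s = (rng.random(200) < 0.5).astype(int)        # synthetic sensitive attribute
    y = ((X[:, 0] + 0.5 * s + rng.normal(0, 0.3, 200)) > shift).astype(float)
    clients.append((X, y, s))

w_global = np.zeros(3)
for rnd in range(5):
    # each client imposes its own fairness penalty, then updates locally
    local_ws = [local_update(w_global.copy(), X, y, s, lam)
                for (X, y, s), lam in zip(clients, (0.5, 2.0))]
    w_global = np.mean(local_ws, axis=0)           # FedAvg-style aggregation

for i, (X, y, s) in enumerate(clients):
    p = 1.0 / (1.0 + np.exp(-X @ w_global))
    print(f"client {i}: acc={((p > 0.5) == y).mean():.2f}, DP gap={dp_gap(p, s):.3f}")
```

In this toy, a client with a larger `lam` trades some accuracy for a smaller demographic-parity gap on its own distribution, which is the client-level accuracy-fairness trade-off the paper studies.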
Similar Papers
FedFACT: A Provable Framework for Controllable Group-Fairness Calibration in Federated Learning
Machine Learning (CS)
Lets you set how fair the AI must be across groups, with provable guarantees.
Fairness-Constrained Optimization Attack in Federated Learning
Machine Learning (CS)
Makes AI unfairly biased, even when it seems accurate.
FedPref: Federated Learning Across Heterogeneous Multi-objective Preferences
Machine Learning (CS)
Helps AI learn from many users with different goals, without seeing their private data.