Convergence-Privacy-Fairness Trade-Off in Personalized Federated Learning
By: Xiyu Zhao, Qimei Cui, Weicai Li, et al.
Potential Business Impact:
Keeps private data safe while learning.
Personalized federated learning (PFL), e.g., the renowned Ditto, strikes a balance between personalization and generalization by conducting federated learning (FL) to guide personalized learning (PL). While FL is unaffected by personalized model training, in Ditto, PL depends on the outcome of FL. However, the clients' concern about their privacy, and the consequent perturbation of their local models, can affect the convergence and (performance) fairness of PL. This paper presents a PFL framework, called DP-Ditto, which is a non-trivial extension of Ditto under the protection of differential privacy (DP), and analyzes the trade-off among its privacy guarantee, model convergence, and performance distribution fairness. We derive the convergence upper bound of the personalized models under DP-Ditto and obtain the optimal number of global aggregations given a privacy budget. Further, we analyze the performance fairness of the personalized models, and reveal the feasibility of optimizing DP-Ditto jointly for convergence and fairness. Experiments validate our analysis and demonstrate that DP-Ditto can surpass the DP-perturbed versions of state-of-the-art PFL models, such as FedAMP, pFedMe, APPLE, and FedALA, by over 32.71% in fairness and 9.66% in accuracy.
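The DP perturbation of clients' local models described above is commonly realized with the Gaussian mechanism: each client clips its model update to a bounded L2 norm and adds calibrated noise before sharing it. The sketch below illustrates that generic step, assuming a standard (epsilon, delta)-Gaussian mechanism; it is not the paper's exact noise calibration, and the function name and parameters are illustrative.

```python
import numpy as np

def dp_perturb(update, clip_norm, epsilon, delta, rng=None):
    """Clip a local model update to L2 norm <= clip_norm, then add
    Gaussian noise calibrated for (epsilon, delta)-DP.

    A generic Gaussian-mechanism sketch, not DP-Ditto's exact scheme.
    """
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    # Scale down only if the update exceeds the clipping threshold.
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    # Standard Gaussian-mechanism noise scale: the L2 sensitivity after
    # clipping is clip_norm, so sigma = clip_norm * sqrt(2 ln(1.25/delta)) / epsilon.
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return clipped + rng.normal(0.0, sigma, size=update.shape)
```

A tighter privacy budget (smaller epsilon) yields larger sigma, which is the source of the convergence and fairness degradation the paper analyzes.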
Similar Papers
Empirical Analysis of Privacy-Fairness-Accuracy Trade-offs in Federated Learning: A Step Towards Responsible AI
Machine Learning (CS)
Keeps data private while making AI fair.
Adaptive Latent-Space Constraints in Personalized FL
Machine Learning (CS)
Helps AI learn better from different data.
CO-PFL: Contribution-Oriented Personalized Federated Learning for Heterogeneous Networks
Machine Learning (CS)
Makes AI learn better from everyone's unique data.