Empirical Analysis of Privacy-Fairness-Accuracy Trade-offs in Federated Learning: A Step Towards Responsible AI
By: Dawood Wasif, Dian Chen, Sindhuja Madabushi, and more
Potential Business Impact:
Keeps data private while making AI fair.
Federated Learning (FL) enables collaborative model training while preserving data privacy; however, balancing privacy preservation and fairness poses significant challenges. In this paper, we present the first unified large-scale empirical study of privacy-fairness-utility trade-offs in FL, advancing toward responsible AI deployment. Specifically, we systematically compare Differential Privacy (DP), Homomorphic Encryption (HE), and Secure Multi-Party Computation (SMC) combined with fairness-aware optimizers, including q-FedAvg, q-MAML, and Ditto, evaluating their performance under IID and non-IID scenarios on benchmark datasets (MNIST, Fashion-MNIST) and real-world datasets (Alzheimer's MRI, credit-card fraud detection). Our analysis reveals that HE and SMC significantly outperform DP in achieving equitable outcomes under data skew, although at higher computational cost. Remarkably, we uncover unexpected interactions: DP mechanisms can negatively impact fairness, and fairness-aware optimizers can inadvertently reduce privacy effectiveness. We conclude with practical guidelines for designing robust FL systems that deliver equitable, privacy-preserving, and accurate outcomes.
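To make the reported interaction between DP noise and fairness-aware aggregation concrete, the following is a minimal, hypothetical Python sketch of one federated round that combines q-FedAvg-style loss reweighting (in the spirit of q-FFL) with DP-SGD-style clipping and Gaussian noise. It is not the authors' implementation; the helper names and parameters (local_grad, q, clip_norm, noise_mult) are illustrative assumptions.

```python
import numpy as np

def local_grad(w, X, y):
    """Logistic-regression loss and gradient on one client's data."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    loss = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    grad = X.T @ (p - y) / len(y)
    return loss, grad

def federated_round(w, clients, q=1.0, lr=0.1, clip_norm=1.0,
                    noise_mult=0.5, rng=None):
    """One server round: per-client clipped, noised updates,
    aggregated with q-FedAvg-style loss^q weights (a sketch, not
    the paper's exact algorithm)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    deltas, weights = [], []
    for X, y in clients:
        loss, grad = local_grad(w, X, y)
        # DP-SGD-style step: clip the client update, then add Gaussian noise.
        norm = np.linalg.norm(grad)
        grad = grad * min(1.0, clip_norm / (norm + 1e-12))
        grad = grad + rng.normal(0.0, noise_mult * clip_norm, size=grad.shape)
        # Fairness reweighting: higher-loss clients get proportionally
        # more influence on the global update (q=0 recovers plain FedAvg).
        deltas.append(loss ** q * grad)
        weights.append(loss ** q)
    update = sum(deltas) / (sum(weights) + 1e-12)
    return w - lr * update

# Toy usage: two clients with skewed (non-IID) label distributions.
rng = np.random.default_rng(42)
clients = [
    (rng.normal(size=(50, 3)), (rng.random(50) < 0.9).astype(float)),
    (rng.normal(size=(50, 3)), (rng.random(50) < 0.1).astype(float)),
]
w = np.zeros(3)
for _ in range(20):
    w = federated_round(w, clients, q=2.0, rng=rng)
print("global weights:", w)
```

Raising q shifts aggregation weight toward high-loss clients, but because those reweighted updates carry injected DP noise, it also amplifies that noise in the global model; this gives one intuition for the kind of privacy-fairness interaction the paper measures empirically.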
Similar Papers
Emerging Paradigms for Securing Federated Learning Systems
Cryptography and Security
Makes AI learn from data without seeing it.
A Privacy-Preserving Federated Learning Method with Homomorphic Encryption in Omics Data
Cryptography and Security
Keeps medical secrets safe, still finds cures.
Convergence-Privacy-Fairness Trade-Off in Personalized Federated Learning
Machine Learning (CS)
Keeps private data safe while learning.