PPFPL: Cross-silo Privacy-preserving Federated Prototype Learning Against Data Poisoning Attacks
By: Hongliang Zhang, Jiguo Yu, Fenghua Xu, and more
Potential Business Impact:
Protects private data while training smart computer programs.
Privacy-Preserving Federated Learning (PPFL) enables multiple clients to collaboratively train models by submitting hidden model updates. Nonetheless, PPFL is vulnerable to data poisoning attacks due to its distributed training paradigm in cross-silo scenarios. Existing solutions have struggled to improve the performance of PPFL under poisoned Non-Independent and Identically Distributed (Non-IID) data. To address these issues, this paper proposes a privacy-preserving federated prototype learning framework, named PPFPL, which enhances cross-silo FL performance against poisoned Non-IID data while protecting client privacy. Specifically, we adopt prototypes as client-submitted model updates to eliminate the impact of poisoned data distributions. In addition, we design a secure aggregation protocol utilizing homomorphic encryption to achieve Byzantine-robust aggregation on two servers, significantly reducing the impact of malicious clients. Theoretical analyses confirm the convergence and privacy of PPFPL. Experimental results on public datasets show that PPFPL effectively resists data poisoning attacks under Non-IID settings.
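To make the prototype-based update concrete, the sketch below shows one plausible reading of the abstract: each client computes per-class prototypes as mean feature embeddings of its local data, and a server aggregates them as a count-weighted average. This is a minimal, hypothetical illustration only; the function names are invented, the aggregation is done in plaintext as a stand-in for the paper's homomorphic-encryption-based two-server protocol, and it omits the Byzantine-robust filtering the paper describes.

```python
import torch

def local_prototypes(model, dataloader, num_classes, feat_dim):
    """Client side (hypothetical sketch): per-class prototypes as mean feature embeddings."""
    sums = torch.zeros(num_classes, feat_dim)
    counts = torch.zeros(num_classes)
    model.eval()
    with torch.no_grad():
        for x, y in dataloader:
            feats = model(x)              # assumes the model returns feature embeddings
            for c in y.unique():
                mask = (y == c)
                sums[c] += feats[mask].sum(dim=0)
                counts[c] += mask.sum()
    counts = counts.clamp(min=1)          # avoid division by zero for absent classes
    return sums / counts.unsqueeze(1), counts

def aggregate_prototypes(client_protos, client_counts):
    """Server side (plaintext stand-in for the paper's HE-based two-server aggregation):
    count-weighted average of client prototypes."""
    total = torch.stack(client_counts).sum(dim=0).clamp(min=1)
    weighted = torch.stack([p * c.unsqueeze(1) for p, c in zip(client_protos, client_counts)])
    return weighted.sum(dim=0) / total.unsqueeze(1)
```

Because clients exchange class prototypes rather than gradients or raw parameters, a poisoned local data distribution mainly distorts that client's own prototypes, which the robust aggregation step can then down-weight.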
Similar Papers
Fast, Private, and Protected: Safeguarding Data Privacy and Defending Against Model Poisoning Attacks in Federated Learning
Machine Learning (CS)
Protects shared computer learning from bad guys.
Privacy Preserving Machine Learning Model Personalization through Federated Personalized Learning
Machine Learning (CS)
Keeps your private data safe when AI learns.
FedPPA: Progressive Parameter Alignment for Personalized Federated Learning
Machine Learning (CS)
Helps computers learn from everyone's private info.