Score: 1

PPFPL: Cross-silo Privacy-preserving Federated Prototype Learning Against Data Poisoning Attacks

Published: April 4, 2025 | arXiv ID: 2504.03173v5

By: Hongliang Zhang, Jiguo Yu, Fenghua Xu, and more

BigTech Affiliations: Weibo

Potential Business Impact:

Enables organizations to train machine learning models together without exposing their private data, even when some participants submit poisoned training data.

Business Areas:
Fraud Detection, Financial Services, Payments, Privacy and Security

Privacy-Preserving Federated Learning (PPFL) enables multiple clients to collaboratively train models by submitting concealed model updates. Nonetheless, PPFL is vulnerable to data poisoning attacks due to its distributed training paradigm in cross-silo scenarios. Existing solutions have struggled to improve the performance of PPFL under poisoned Non-Independent and Identically Distributed (Non-IID) data. To address these issues, this paper proposes a privacy-preserving federated prototype learning framework, named PPFPL, which enhances cross-silo FL performance against poisoned Non-IID data while protecting client privacy. Specifically, it adopts prototypes as the client-submitted model updates to eliminate the impact of poisoned data distributions. In addition, it introduces a secure aggregation protocol utilizing homomorphic encryption to achieve Byzantine-robust aggregation across two servers, significantly reducing the impact of malicious clients. Theoretical analyses confirm the convergence and privacy guarantees of PPFPL. Experimental results on public datasets show that PPFPL effectively resists data poisoning attacks under Non-IID settings.
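The core idea of submitting prototypes instead of gradients can be sketched as follows: each client computes a per-class mean of its feature embeddings (a prototype) and sends only those vectors, which reveal less about individual samples and are less sensitive to the client's local data distribution. This is a minimal illustrative sketch, not the paper's implementation: the function names are hypothetical, and the server-side step here is plain averaging, whereas PPFPL performs this aggregation under homomorphic encryption across two servers with Byzantine-robust filtering.

```python
import numpy as np

def local_prototypes(features, labels, num_classes):
    """Client side: per-class mean embedding (prototype) over local data.
    Hypothetical sketch; PPFPL's exact prototype construction may differ."""
    protos = {}
    for c in range(num_classes):
        mask = labels == c
        if mask.any():  # only emit prototypes for classes the client holds
            protos[c] = features[mask].mean(axis=0)
    return protos

def aggregate(client_protos):
    """Server side: average prototypes per class across clients.
    Plain averaging for illustration only; the paper's protocol runs this
    over encrypted values on two servers with robustness checks."""
    agg = {}
    for protos in client_protos:
        for c, p in protos.items():
            agg.setdefault(c, []).append(p)
    return {c: np.mean(ps, axis=0) for c, ps in agg.items()}
```

Because each prototype is a mean over many local samples, a poisoned minority of a client's data shifts the prototype less than it would shift raw gradient updates, which is one intuition behind using prototypes against poisoned Non-IID distributions.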

Country of Origin
🇨🇳 China

Page Count
15 pages

Category
Computer Science:
Cryptography and Security