Practical Framework for Privacy-Preserving and Byzantine-Robust Federated Learning
By: Baolei Zhang, Minghong Fang, Zhuqing Liu, and more
Potential Business Impact:
Protects private data while training shared computer brains.
Federated Learning (FL) allows multiple clients to collaboratively train a model without sharing their private data. However, FL is vulnerable to Byzantine attacks, where adversaries manipulate client models to compromise the federated model, and to privacy inference attacks, where adversaries exploit client models to infer private data. Existing defenses against both Byzantine and privacy inference attacks introduce significant computational and communication overhead, creating a gap between theory and practice. To address this, we propose ABBR, a practical framework for Byzantine-robust and privacy-preserving FL. We are the first to use dimensionality reduction to speed up the private computation of complex filtering rules in privacy-preserving FL. Additionally, we analyze the accuracy loss of vector-wise filtering in low-dimensional space and introduce an adaptive tuning strategy to minimize the impact on the global model of malicious models that bypass the filter. We implement ABBR with state-of-the-art Byzantine-robust aggregation rules and evaluate it on public datasets, showing that it runs significantly faster than the baselines, adds minimal communication overhead, and retains nearly the same Byzantine resilience.
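The pipeline the abstract describes can be pictured as: project each client's high-dimensional model update into a low-dimensional space, run a vector-wise Byzantine filter on the cheap projections, then aggregate only the surviving full-dimensional updates. The sketch below (Python/NumPy) illustrates that plaintext logic only, under stated assumptions: the Gaussian random projection, the Krum-style distance scoring, and every function and parameter name are illustrative choices, not ABBR's actual filtering rules, and ABBR's privacy-preserving machinery (the filter runs under cryptographic protection) and its adaptive tuning strategy are omitted entirely.

```python
# Minimal sketch (not the authors' implementation): filter client updates
# in a low-dimensional projection, then average the accepted full updates.
# All names and constants here are illustrative assumptions.
import numpy as np

def random_projection(dim: int, low_dim: int, seed: int = 0) -> np.ndarray:
    """Gaussian JL-style projection matrix, assumed shared by all parties."""
    rng = np.random.default_rng(seed)
    return rng.standard_normal((dim, low_dim)) / np.sqrt(low_dim)

def krum_style_scores(points: np.ndarray, n_byzantine: int) -> np.ndarray:
    """Score each point by the summed distance to its nearest
    n - n_byzantine - 2 neighbors (Krum-style; lower is better)."""
    n = len(points)
    k = max(n - n_byzantine - 2, 1)
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    # Sort each row and skip column 0, which is the zero self-distance.
    return np.sort(dists, axis=1)[:, 1:k + 1].sum(axis=1)

def filtered_aggregate(updates: np.ndarray, n_byzantine: int,
                       low_dim: int = 32) -> np.ndarray:
    """Project, filter vector-wise in low dimension, average survivors."""
    n, dim = updates.shape
    low = updates @ random_projection(dim, low_dim)  # cheap sketches
    scores = krum_style_scores(low, n_byzantine)
    keep = np.argsort(scores)[:n - n_byzantine]      # best-scoring clients
    return updates[keep].mean(axis=0)

# Toy usage: 8 honest clients near a true update, 2 Byzantine outliers.
rng = np.random.default_rng(1)
true_update = rng.standard_normal(10_000)
honest = true_update + 0.1 * rng.standard_normal((8, 10_000))
byzantine = 50.0 * rng.standard_normal((2, 10_000))
agg = filtered_aggregate(np.vstack([honest, byzantine]), n_byzantine=2)
print(np.linalg.norm(agg - true_update))  # small: outliers were filtered
```

The point the projection buys, and the reason it speeds up private filtering, is that the pairwise-distance computations dominating vector-wise rules now run over 32-dimensional sketches rather than over full model-sized vectors; the accuracy-loss analysis and adaptive tuning in the paper address the filtering errors this compression can introduce.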
Similar Papers
Efficient Byzantine-Robust Privacy-Preserving Federated Learning via Dimension Compression
Cryptography and Security
Keeps private data safe while training AI.
Rethinking Byzantine Robustness in Federated Recommendation from Sparse Aggregation Perspective
Cryptography and Security
Protects private recommendations from fake user attacks.
Byzantine-Robust Federated Learning with Learnable Aggregation Weights
Machine Learning (CS)
Keeps smart learning safe from bad guys.