Nesterov-Accelerated Robust Federated Learning Over Byzantine Adversaries
By: Lihan Xu, Yanjie Dong, Gang Wang, et al.
Potential Business Impact:
Protects collaboratively trained models from malicious participants.
We investigate robust federated learning, where a group of workers collaboratively trains a shared model under the orchestration of a central server in the presence of Byzantine adversaries capable of arbitrary, potentially malicious behavior. To simultaneously enhance communication efficiency and robustness against such adversaries, we propose the Byzantine-resilient Nesterov-Accelerated Federated Learning (Byrd-NAFL) algorithm. Byrd-NAFL seamlessly integrates Nesterov's momentum into the federated learning process alongside Byzantine-resilient aggregation rules, achieving fast convergence while safeguarding against gradient corruption. We establish a finite-time convergence guarantee for Byrd-NAFL under non-convex, smooth loss functions with a relaxed assumption on the aggregated gradients. Extensive numerical experiments validate the effectiveness of Byrd-NAFL and demonstrate its superiority over existing benchmarks in convergence speed, accuracy, and resilience to diverse Byzantine attack strategies.
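As a rough illustration only, the sketch below shows the general structure the abstract describes: a Byzantine-resilient aggregator combined with a Nesterov-style momentum step at the server. The abstract does not give the paper's exact update rule, so the choice of coordinate-wise median as the aggregator, the momentum formulation, and all function names here are assumptions.

```python
import numpy as np

def coordinate_wise_median(gradients):
    # One common Byzantine-resilient aggregation rule (assumed here,
    # not necessarily the one used in Byrd-NAFL): the per-coordinate
    # median is unaffected by a minority of arbitrarily corrupted
    # gradients, unlike a plain average.
    return np.median(np.stack(gradients), axis=0)

def byrd_nafl_round(x, v, worker_grads, lr=0.01, beta=0.9):
    # Sketch of one server round: robust aggregation followed by a
    # Nesterov-accelerated update (written in the common reformulation
    # where the lookahead is folded into the step). Hypothetical; the
    # paper's actual update may differ.
    g = coordinate_wise_median(worker_grads)  # robust aggregation
    v = beta * v + g                          # momentum accumulation
    x = x - lr * (g + beta * v)               # Nesterov-style step
    return x, v

# Toy usage: 10 honest workers plus 3 Byzantine workers sending
# wildly corrupted gradients; the median keeps the update sane.
rng = np.random.default_rng(0)
x, v = np.zeros(5), np.zeros(5)
honest = [rng.normal(1.0, 0.1, size=5) for _ in range(10)]
byzantine = [rng.normal(0.0, 100.0, size=5) for _ in range(3)]
x, v = byrd_nafl_round(x, v, honest + byzantine)
```

With a plain mean, the three corrupted gradients above would dominate the update; the coordinate-wise median tolerates them as long as honest workers form a majority, which is the intuition behind Byzantine-resilient aggregation in general.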
Similar Papers
Byzantine-Robust Federated Learning with Learnable Aggregation Weights
Machine Learning (CS)
Keeps shared model training safe from malicious workers.
Byzantine-Robust Federated Learning Using Generative Adversarial Networks
Cryptography and Security
Keeps AI learning safe from poisoned updates.
Delayed Momentum Aggregation: Communication-efficient Byzantine-robust Federated Learning with Partial Participation
Machine Learning (CS)
Keeps AI learning safe from bad data, even when only some devices participate.