Nesterov-Accelerated Robust Federated Learning Over Byzantine Adversaries

Published: November 4, 2025 | arXiv ID: 2511.02657v1

By: Lihan Xu, Yanjie Dong, Gang Wang and more

Potential Business Impact:

Protects collaboratively trained machine learning models from malicious (Byzantine) participants.

Business Areas:
A/B Testing Data and Analytics

We investigate robust federated learning, where a group of workers collaboratively trains a shared model under the orchestration of a central server in the presence of Byzantine adversaries capable of arbitrary and potentially malicious behaviors. To simultaneously enhance communication efficiency and robustness against such adversaries, we propose a Byzantine-resilient Nesterov-Accelerated Federated Learning (Byrd-NAFL) algorithm. Byrd-NAFL seamlessly integrates Nesterov's momentum into the federated learning process alongside Byzantine-resilient aggregation rules to achieve fast convergence while safeguarding against gradient corruption. We establish a finite-time convergence guarantee for Byrd-NAFL under non-convex and smooth loss functions with a relaxed assumption on the aggregated gradients. Extensive numerical experiments validate the effectiveness of Byrd-NAFL and demonstrate its superiority over existing benchmarks in terms of convergence speed, accuracy, and resilience to diverse Byzantine attack strategies.
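The abstract does not spell out the exact update rule, but the core idea it describes (a robust aggregate of worker gradients fed into a Nesterov-style momentum step at the server) can be sketched as follows. This is a minimal illustrative sketch, not the paper's algorithm: the coordinate-wise median aggregator, the step sizes, the momentum formulation, and the function names (`byrd_nafl_server_step`, `coordinate_wise_median`) are all assumptions made here for clarity.

```python
import numpy as np

def coordinate_wise_median(gradients):
    """One common Byzantine-resilient aggregation rule (assumed here for
    illustration): take the coordinate-wise median of the worker gradients."""
    return np.median(np.stack(gradients), axis=0)

def byrd_nafl_server_step(x, v, worker_grads, lr=0.01, beta=0.9,
                          aggregate=coordinate_wise_median):
    """Hypothetical server update: robust aggregate + Nesterov-style momentum.

    x: current model parameters, v: momentum buffer.
    Returns the updated (x, v)."""
    g = aggregate(worker_grads)        # robust aggregate of possibly corrupted gradients
    v_new = beta * v - lr * g          # momentum accumulation
    x_new = x + beta * v_new - lr * g  # Nesterov-style lookahead step
    return x_new, v_new

# Toy usage: minimize 0.5 * ||x - target||^2 with 8 honest workers
# (noisy true gradients) and 2 Byzantine workers (large random garbage).
rng = np.random.default_rng(0)
dim = 5
target = np.ones(dim)
x, v = np.zeros(dim), np.zeros(dim)
for _ in range(200):
    honest = [x - target + rng.normal(scale=0.1, size=dim) for _ in range(8)]
    byzantine = [rng.normal(scale=100.0, size=dim) for _ in range(2)]
    x, v = byrd_nafl_server_step(x, v, honest + byzantine)
print(np.round(x, 3))  # approaches the target despite the corrupted gradients
```

In this toy run the median aggregator discards the outlier gradients as long as honest workers form a majority, while the momentum term accelerates progress on the smooth objective; the paper's contribution is a finite-time guarantee for this kind of combination under non-convex losses, which the sketch does not attempt to reproduce.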

Page Count
13 pages

Category
Computer Science:
Machine Learning (CS)