Delayed Momentum Aggregation: Communication-efficient Byzantine-robust Federated Learning with Partial Participation
By: Kaoru Otsuka, Yuki Takezawa, Makoto Yamada
Potential Business Impact:
Keeps AI learning safe from bad data.
Federated Learning (FL) allows distributed model training across multiple clients while preserving data privacy, but it remains vulnerable to Byzantine clients that exhibit malicious behavior. While existing Byzantine-robust FL methods provide strong convergence guarantees (e.g., to a stationary point in expectation) under Byzantine attacks, they typically assume full client participation, which is unrealistic given communication constraints and limited client availability. Under partial participation, existing methods fail as soon as the sampled clients contain a Byzantine majority, creating a fundamental challenge for sparse communication. To address this, we first introduce delayed momentum aggregation, a novel principle in which the server aggregates the most recently received gradients from non-participating clients alongside fresh momentum from active clients. Our optimizer D-Byz-SGDM (Delayed Byzantine-robust SGD with Momentum) implements this delayed momentum aggregation principle for Byzantine-robust FL with partial participation. We then establish convergence guarantees that recover previous full-participation results and match the fundamental lower bounds we prove for the partial participation setting. Experiments on deep learning tasks validate our theoretical findings, showing stable and robust training under various Byzantine attacks.
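To make the delayed momentum aggregation idea concrete, here is a minimal sketch of a server loop under partial participation. It assumes a toy quadratic loss per client, a coordinate-wise median as a generic stand-in for a Byzantine-robust aggregator, and hypothetical names (server_cache, robust_aggregate, local_gradient) that are not taken from the paper; it illustrates the principle, not the authors' exact D-Byz-SGDM algorithm.

```python
# Sketch: delayed momentum aggregation under partial client participation.
# Assumptions: quadratic client losses, coordinate-wise median aggregation,
# and hypothetical helper names; not the paper's reference implementation.
import numpy as np

rng = np.random.default_rng(0)

n_clients, dim, beta, lr = 10, 5, 0.9, 0.1
targets = rng.normal(size=(n_clients, dim))   # each client's local optimum
momenta = np.zeros((n_clients, dim))          # client-side momentum buffers
server_cache = np.zeros((n_clients, dim))     # last update received from each client
x = np.zeros(dim)                             # global model parameters

def local_gradient(i, x):
    # Gradient of 0.5 * ||x - target_i||^2 for an honest client i.
    return x - targets[i]

def robust_aggregate(vectors):
    # Coordinate-wise median as a generic Byzantine-robust aggregator.
    return np.median(vectors, axis=0)

for step in range(100):
    # Partial participation: only a random subset of clients communicates.
    active = rng.choice(n_clients, size=3, replace=False)
    for i in active:
        g = local_gradient(i, x)
        momenta[i] = beta * momenta[i] + (1 - beta) * g  # fresh local momentum
        server_cache[i] = momenta[i]                     # server stores the latest update
    # Delayed momentum aggregation: combine fresh momenta from active clients
    # with the cached (stale) updates of non-participating clients.
    x -= lr * robust_aggregate(server_cache)

print("distance to mean target:", np.linalg.norm(x - targets.mean(axis=0)))
```

Caching stale updates keeps the robust aggregator supplied with a full set of (mostly honest) inputs each round, which is what prevents a sampled Byzantine majority from dominating a single aggregation step.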
Similar Papers
Byzantine-Robust Federated Learning with Learnable Aggregation Weights
Machine Learning (CS)
Keeps smart learning safe from bad guys.
Byzantine-Resilient Federated Learning via Distributed Optimization
Machine Learning (CS)
Protects computer learning from bad guys.
Robust Federated Learning under Adversarial Attacks via Loss-Based Client Clustering
Machine Learning (CS)
Protects smart learning from bad data.