Divergence-Based Adaptive Aggregation for Byzantine Robust Federated Learning
By: Bingnan Xiao, Feng Zhu, Jingjing Zhang, and more
Potential Business Impact:
Helps AI learn faster and more safely from many computers.
Inherent client drifts caused by data heterogeneity, as well as vulnerability to Byzantine attacks within the system, hinder effective model training and convergence in federated learning (FL). This paper presents two new frameworks, named DiveRgence-based Adaptive aGgregation (DRAG) and Byzantine-Resilient DRAG (BR-DRAG), to mitigate client drifts and resist attacks while expediting training. DRAG designs a reference direction and a metric named degree of divergence to quantify the deviation of local updates. Accordingly, each worker can align its local update via linear calibration without extra communication cost. BR-DRAG refines DRAG under Byzantine attacks by maintaining a vetted root dataset at the server to produce trusted reference directions. The workers' updates can then be calibrated to mitigate the divergence caused by malicious attacks. We analytically prove that DRAG and BR-DRAG achieve fast convergence for non-convex models under partial worker participation, data heterogeneity, and Byzantine attacks. Experiments validate the effectiveness of DRAG and its superior performance over state-of-the-art methods in handling client drifts, and highlight the robustness of BR-DRAG in maintaining resilience against data heterogeneity and diverse Byzantine attacks.
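To make the calibration idea concrete, below is a minimal Python sketch of divergence-based calibration and aggregation. It assumes the reference direction is the previous global update, uses a cosine-based proxy for the degree-of-divergence metric, and pulls each worker's update toward the reference via a convex combination. The function names, the divergence formula, and the calibration weight are illustrative assumptions, not the paper's exact definitions.

```python
import numpy as np

def degree_of_divergence(local_update, reference_direction, eps=1e-12):
    """Illustrative divergence score in [0, 1]: 0 when the local update is
    aligned with the reference direction, 1 when it points the opposite way.
    (The paper defines its own metric; this cosine-based proxy is an assumption.)"""
    cos = np.dot(local_update, reference_direction) / (
        np.linalg.norm(local_update) * np.linalg.norm(reference_direction) + eps
    )
    return 0.5 * (1.0 - cos)

def calibrate_update(local_update, reference_direction, strength=1.0):
    """Linearly pull a worker's update toward the reference direction,
    with the pull proportional to its divergence score (hypothetical rule)."""
    d = degree_of_divergence(local_update, reference_direction)
    alpha = min(1.0, strength * d)  # calibration weight, an illustrative choice
    ref_scaled = reference_direction * (
        np.linalg.norm(local_update) / (np.linalg.norm(reference_direction) + 1e-12)
    )
    return (1.0 - alpha) * local_update + alpha * ref_scaled

def aggregate(local_updates, reference_direction):
    """Server-side step: calibrate every participating worker's update,
    then average them (a stand-in for the paper's aggregation rule)."""
    calibrated = [calibrate_update(u, reference_direction) for u in local_updates]
    return np.mean(calibrated, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reference = rng.normal(size=10)                  # e.g., previous global update
    updates = [reference + 0.3 * rng.normal(size=10) for _ in range(4)]
    updates.append(-5.0 * reference)                 # a drifted or adversarial update
    print("aggregated update:", aggregate(updates, reference))
```

In the BR-DRAG setting, the reference direction would instead come from a gradient computed on the server's vetted root dataset, so that calibration also damps updates from Byzantine workers; the sketch above only changes in where `reference_direction` originates.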
Similar Papers
Coded Robust Aggregation for Distributed Learning under Byzantine Attacks
Machine Learning (CS)
Protects computer learning from bad data.
Practical Framework for Privacy-Preserving and Byzantine-robust Federated Learning
Cryptography and Security
Protects private data while training shared computer brains.
ProDiGy: Proximity- and Dissimilarity-Based Byzantine-Robust Federated Learning
Machine Learning (CS)
Protects computer learning from bad data.