Corrected with the Latest Version: Make Robust Asynchronous Federated Learning Possible
By: Chaoyi Lu, Yiding Sun, Pengbo Li, and more
Potential Business Impact:
Makes AI learn faster without mistakes.
As an emerging paradigm of federated learning, asynchronous federated learning offers significant speed advantages over traditional synchronous federated learning. Unlike synchronous federated learning, which requires waiting for all clients to complete their updates before aggregation, asynchronous federated learning aggregates models as they arrive in real time, greatly improving training speed. However, this mechanism also introduces the issue of client model version inconsistency. When the differences between models of different versions become too large during aggregation, conflicts may arise, reducing the model's accuracy. To address this issue, this paper proposes an asynchronous federated learning version correction algorithm based on knowledge distillation, named FedADT. FedADT applies knowledge distillation before aggregating gradients, using the latest global model to correct outdated information and thereby effectively reducing the negative impact of stale gradients on the training process. Additionally, FedADT introduces an adaptive weighting function that adjusts the knowledge distillation weight according to the stage of training, which helps mitigate the misleading effect of the global model's poorer performance in the early stages of training. This method significantly improves the overall performance of asynchronous federated learning without adding excessive computational overhead. We conducted experimental comparisons with several classical algorithms, and the results demonstrate that FedADT achieves significant improvements over other asynchronous methods and outperforms all compared methods in terms of convergence speed.
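The abstract describes two ingredients: correcting a stale client update against the latest global model before aggregation, and an adaptive weight that keeps the distillation term small early in training (when the global model is still weak). The paper's exact loss and weighting function are not given here, so the following is a minimal illustrative sketch under stated assumptions: `adaptive_kd_weight` is a hypothetical linear ramp, and the "correction" is modeled as blending the stale gradient with the client's drift from the current global parameters rather than the paper's actual distillation loss.

```python
import numpy as np

def adaptive_kd_weight(round_idx, total_rounds, w_max=0.5):
    """Hypothetical ramp: small KD weight early (weak global model),
    growing linearly toward w_max as training progresses."""
    return w_max * round_idx / max(1, total_rounds - 1)

def corrected_gradient(local_grad, local_params, global_params, kd_weight):
    """Sketch of version correction: pull the stale client update
    toward the latest global model before server-side aggregation."""
    drift = local_params - global_params  # stand-in for a distillation signal
    return (1 - kd_weight) * local_grad + kd_weight * drift

# Toy usage: one stale client gradient corrected at different rounds.
rng = np.random.default_rng(0)
g_local = rng.normal(size=4)        # gradient computed on an old model version
theta_local = rng.normal(size=4)    # the stale client parameters
theta_global = rng.normal(size=4)   # latest global model on the server

for t in (0, 5, 9):
    w = adaptive_kd_weight(t, total_rounds=10)
    g_corrected = corrected_gradient(g_local, theta_local, theta_global, w)
```

At round 0 the weight is zero, so the client's gradient passes through unchanged; by the final round up to half of the update is replaced by the correction term, matching the intuition that the global model only becomes a trustworthy teacher later in training.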
Similar Papers
FedADP: Unified Model Aggregation for Federated Learning with Heterogeneous Model Architectures
Machine Learning (CS)
Lets different computers learn together better.
Stragglers Can Contribute More: Uncertainty-Aware Distillation for Asynchronous Federated Learning
Machine Learning (CS)
Helps AI learn faster from many computers.
FedDuA: Doubly Adaptive Federated Learning
Machine Learning (CS)
Teaches computers faster without sharing private info.