Mitigating Persistent Client Dropout in Asynchronous Decentralized Federated Learning
By: Ignacy Stępka, Nicholas Gisolfi, Kacper Trębacz, and more
Potential Business Impact:
Recovers AI training progress when devices drop out
We consider the problem of persistent client dropout in asynchronous Decentralized Federated Learning (DFL). Asynchronicity and decentralization obscure information about model updates among federation peers, making recovery from a client dropout difficult: access to the number of learning epochs, the data distributions, and the other information necessary to precisely reconstruct the missing neighbor's loss function is limited. We show that obvious mitigations do not adequately address the problem and introduce adaptive strategies based on client reconstruction, which can effectively recover some of the performance lost to dropout. Our work focuses on asynchronous DFL with local regularization and differs substantially from the existing literature. We evaluate the proposed methods on tabular and image datasets, across three DFL algorithms and three data heterogeneity scenarios (iid, non-iid, and class-focused non-iid). Our experiments show that the proposed adaptive strategies can effectively maintain the robustness of federated learning even when they do not reconstruct the missing client's data precisely. We also discuss the limitations of our approach and identify future avenues for tackling client dropout.
Similar Papers
Adaptive Decentralized Federated Learning for Robust Optimization
Machine Learning (CS)
Fixes computer learning when some data is bad.
Fault-Tolerant Decentralized Distributed Asynchronous Federated Learning with Adaptive Termination Detection
Distributed, Parallel, and Cluster Computing
Lets computers learn together without sharing private data.