The Impact Analysis of Delays in Asynchronous Federated Learning with Data Heterogeneity for Edge Intelligence
By: Ziruo Hao, Zhenhua Cui, Tao Yang, and others
Potential Business Impact:
Lets computers learn together even with slow connections.
Federated learning (FL) provides a methodology for coordinating a group of clients to train a machine learning model collaboratively, offering an efficient paradigm for edge intelligence. Despite its promise, FL faces several critical challenges in practical deployments on edge devices, such as data heterogeneity and delays stemming from communication and computation constraints. This paper examines how delays of unknown cause affect training performance in an Asynchronous Federated Learning (AFL) system with data heterogeneity. First, an asynchronous error definition is proposed, based on which the adverse impact of data heterogeneity alone is theoretically analyzed within the traditional Synchronous Federated Learning (SFL) framework. Next, Asynchronous Updates with Delayed Gradients (AUDG), a conventional AFL scheme, is discussed. Analysis of AUDG reveals that the negative influence of data heterogeneity is correlated with delays, and that a shorter average delay from a specific client does not consistently improve training performance. To handle scenarios where AUDG is ill-suited, Pseudo-synchronous Updates by Reusing Delayed Gradients (PSURDG) is proposed, and its theoretical convergence is analyzed. In both AUDG and PSURDG, only a random subset of clients successfully transmits updated results to the central server in each iteration; the critical difference between them lies in whether the delayed information is reused. Finally, both schemes are validated and compared through theoretical analysis and simulations, demonstrating that discarding outdated information due to time delays is not always the best approach.
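The paper's exact algorithms are not reproduced in this abstract, but the core distinction it draws, whether a server update uses only freshly arrived gradients (AUDG-style) or also reuses cached, delayed gradients from clients that failed to report (PSURDG-style), can be illustrated with a minimal toy sketch. The function name `server_round`, the scalar "model", and the cache layout below are all illustrative assumptions, not the authors' implementation.

```python
def server_round(global_model, arrived, cache, reuse_delayed, lr=0.1):
    """One illustrative aggregation step at the central server.

    global_model:  current model (a scalar here, for simplicity)
    arrived:       dict client_id -> gradient received this round
    cache:         dict client_id -> last gradient ever seen from that client
    reuse_delayed: False -> AUDG-style (discard stale clients this round);
                   True  -> PSURDG-style (pseudo-synchronous: reuse cached
                            delayed gradients for clients that did not report)
    """
    # Remember the latest gradient from every client that did report.
    for cid, grad in arrived.items():
        cache[cid] = grad

    if reuse_delayed:
        # Aggregate over all known clients, reusing delayed gradients.
        grads = list(cache.values())
    else:
        # Aggregate only over this round's arrivals; stale info is discarded.
        grads = list(arrived.values())

    if not grads:
        return global_model  # nothing arrived and nothing cached
    avg = sum(grads) / len(grads)
    return global_model - lr * avg
```

For example, if clients 0 and 1 report gradients 1.0 and 2.0 while client 2's last cached gradient is 4.0, the AUDG-style step averages two gradients and the PSURDG-style step averages three, yielding different updates from the same arrivals.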
Similar Papers
Mitigating Participation Imbalance Bias in Asynchronous Federated Learning
Machine Learning (CS)
Makes AI learn better from many different computers.
Client Selection in Federated Learning with Data Heterogeneity and Network Latencies
Machine Learning (CS)
Makes smart computers learn faster from different data.
Empirical Analysis of Asynchronous Federated Learning on Heterogeneous Devices: Efficiency, Fairness, and Privacy Trade-offs
Distributed, Parallel, and Cluster Computing
Makes AI learn faster, but some devices lose more privacy.