Asynchronous Decentralized SGD under Non-Convexity: A Block-Coordinate Descent Framework
By: Yijie Zhou, Shi Pu
Potential Business Impact:
Helps computers learn together faster, even with slow connections.
Decentralized optimization has become vital for leveraging distributed data without central control, enhancing scalability and privacy. However, practical deployments face fundamental challenges due to heterogeneous computation speeds and unpredictable communication delays. This paper introduces a refined model of Asynchronous Decentralized Stochastic Gradient Descent (ADSGD) under practical assumptions of bounded computation and communication times. To understand the convergence of ADSGD, we first analyze Asynchronous Stochastic Block Coordinate Descent (ASBCD) as an analytical tool, and then show that ADSGD converges under computation-delay-independent step sizes. The convergence result is established without assuming bounded data heterogeneity. Experiments show that ADSGD outperforms existing methods in wall-clock convergence time across a range of scenarios. With its simplicity, efficiency in memory and communication, and resilience to communication and computation delays, ADSGD is well-suited for real-world decentralized learning tasks.
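To make the setting concrete, the sketch below simulates the flavor of asynchronous decentralized SGD described in the abstract: each node keeps its own model copy, nodes finish gradient computations at different wall-clock times, and each update mixes the local copy with neighbors' copies via gossip weights before taking a stochastic gradient step. The synthetic least-squares objective, ring topology, event-driven compute-time model, and the omission of communication delays are illustrative assumptions, not the paper's exact ADSGD protocol or analysis.

```python
# Minimal sketch of asynchronous decentralized SGD (illustrative only).
# Assumptions not taken from the paper: a synthetic least-squares objective,
# a ring mixing topology, and an event-driven model of per-node compute times;
# communication delays are ignored for brevity.
import heapq
import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim, n_samples = 8, 5, 200

# Heterogeneous local data: each node i gets its own least-squares problem.
A = [rng.normal(size=(n_samples, dim)) for _ in range(n_nodes)]
x_true = rng.normal(size=dim)
b = [Ai @ x_true + 0.1 * rng.normal(size=n_samples) for Ai in A]

def stochastic_grad(i, x, batch=8):
    """Mini-batch gradient of node i's local least-squares loss at x."""
    idx = rng.integers(0, n_samples, size=batch)
    Ai, bi = A[i][idx], b[i][idx]
    return Ai.T @ (Ai @ x - bi) / batch

# Ring mixing weights: each node averages with its two neighbors.
W = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    W[i, i] = 0.5
    W[i, (i - 1) % n_nodes] = 0.25
    W[i, (i + 1) % n_nodes] = 0.25

X = np.zeros((n_nodes, dim))          # one model copy per node
step = 0.02                           # step size picked without reference to delays
compute_time = rng.uniform(0.5, 2.0, size=n_nodes)  # heterogeneous node speeds

# Event queue: (wall-clock time at which node i finishes its next gradient, i).
events = [(compute_time[i], i) for i in range(n_nodes)]
heapq.heapify(events)

for _ in range(2000):
    t, i = heapq.heappop(events)
    # Node i mixes with its neighbors' current copies, then takes a step using
    # a stochastic gradient evaluated at its own local copy.
    X[i] = W[i] @ X - step * stochastic_grad(i, X[i])
    heapq.heappush(events, (t + compute_time[i], i))

print("mean distance to x_true:", np.linalg.norm(X - x_true, axis=1).mean())
```

Because faster nodes pop off the event queue more often, they update more frequently than slow ones, which is the asynchrony the paper studies; the paper's contribution is showing that a step size chosen independently of these compute delays still yields convergence, without a bounded-data-heterogeneity assumption.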
Similar Papers
Enhancing Parallelism in Decentralized Stochastic Convex Optimization
Machine Learning (CS)
Lets more computers learn together faster.
Ringleader ASGD: The First Asynchronous SGD with Optimal Time Complexity under Data Heterogeneity
Optimization and Control
Trains AI faster on phones with different speeds.
Stochastic Adaptive Gradient Descent Without Descent
Machine Learning (CS)
Makes computer learning faster without needing extra settings.