Distributed Stochastic Momentum Tracking with Local Updates: Achieving Optimal Communication and Iteration Complexities
By: Kun Huang, Shi Pu
Potential Business Impact:
Lets networks of computers train a shared model with far fewer communication rounds, cutting the bandwidth and wall-clock time that distributed machine learning needs.
We propose Local Momentum Tracking (LMT), a novel distributed stochastic gradient method for solving distributed optimization problems over networks. To reduce communication overhead, LMT enables each agent to perform multiple local updates between consecutive communication rounds. Specifically, LMT integrates local updates with the momentum tracking strategy and the Loopless Chebyshev Acceleration (LCA) technique. We demonstrate that LMT achieves linear speedup with respect to the number of local updates as well as the number of agents for minimizing smooth objective functions. Moreover, with sufficiently many local updates ($Q\geq Q^*$), LMT attains the optimal communication complexity. For a moderate number of local updates ($Q\in[1,Q^*]$), it achieves the optimal iteration complexity. To our knowledge, LMT is the first method that enjoys such properties.
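The abstract describes the method's overall structure: each agent takes several local stochastic momentum steps between communication (gossip) rounds, while a tracking variable follows the network-average momentum. The NumPy sketch below illustrates only that general pattern; it omits the Loopless Chebyshev Acceleration step, and the quadratic objectives, ring mixing matrix W, step size lr, and momentum parameter beta are illustrative assumptions, not the paper's exact LMT recursion.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, Q, T = 8, 5, 10, 300           # agents, dimension, local steps, comm rounds
lr, beta, sigma = 0.02, 0.9, 0.1     # step size, momentum, gradient-noise level

# Heterogeneous local objectives: f_i(x) = 0.5 * ||A_i x - b_i||^2 (illustrative).
A = rng.standard_normal((n, d, d)) / np.sqrt(d)
b = rng.standard_normal((n, d))

def stoch_grad(i, x):
    """Stochastic gradient of f_i at x; Gaussian noise models minibatch sampling."""
    return A[i].T @ (A[i] @ x - b[i]) + sigma * rng.standard_normal(d)

# Doubly stochastic mixing matrix for a ring network.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.25

x = np.zeros((n, d))  # local iterates, one row per agent
v = np.zeros((n, d))  # local momentum (EMA of stochastic gradients)
y = np.zeros((n, d))  # tracking variable for the network-average momentum

for t in range(T):
    for _ in range(Q):  # Q local updates between communication rounds
        g = np.stack([stoch_grad(i, x[i]) for i in range(n)])
        v_new = beta * v + (1 - beta) * g
        y += v_new - v          # keep mean(y) == mean(v) (tracking invariant)
        v = v_new
        x -= lr * y             # descend along the tracked direction
    x, y = W @ x, W @ y         # one gossip round mixes iterates and trackers

x_bar = x.mean(axis=0)
grad_bar = sum(A[i].T @ (A[i] @ x_bar - b[i]) for i in range(n)) / n
print(f"consensus error: {np.linalg.norm(x - x_bar):.2e}")
print(f"avg grad norm:   {np.linalg.norm(grad_bar):.2e}")
```

One design point worth noting: the update `y += v_new - v`, together with a doubly stochastic W, preserves the invariant that the agents' trackers average to the agents' momenta, which is what lets each agent descend along an estimate of the global search direction even during communication-free local phases.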
Similar Papers
Better LMO-based Momentum Methods with Second-Order Information
Optimization and Control
Improves momentum methods built on linear minimization oracles (LMOs) by exploiting second-order information.
Compressed Decentralized Momentum Stochastic Gradient Methods for Nonconvex Optimization
Machine Learning (CS)
Cuts communication in decentralized nonconvex training by compressing the updates agents exchange.