Improved Convergence in Parameter-Agnostic Error Feedback through Momentum
By: Abdurakhmon Sadiev, Yury Demidovich, Igor Sokolov, and more
Potential Business Impact:
Makes AI learn faster without needing expert-tuned settings.
Communication compression is essential for scalable distributed training of modern machine learning models, but it often degrades convergence due to the noise it introduces. Error Feedback (EF) mechanisms are widely adopted to mitigate this issue in distributed compression algorithms. Despite their popularity and training efficiency, existing distributed EF algorithms often require prior knowledge of problem parameters (e.g., smoothness constants) to fine-tune stepsizes. This limits their practical applicability, especially in large-scale neural network training. In this paper, we study normalized error feedback algorithms that combine EF with normalized updates, various momentum variants, and parameter-agnostic, time-varying stepsizes, thus eliminating the need for problem-dependent tuning. We analyze the convergence of these algorithms for minimizing smooth functions, and establish parameter-agnostic complexity bounds that are close to the best-known bounds obtained with carefully tuned, problem-dependent stepsizes. Specifically, we show that normalized EF21 achieves a convergence rate of nearly $O(1/T^{1/4})$ with Polyak's heavy-ball momentum, $O(1/T^{2/7})$ with Iterative Gradient Transport (IGT), and $O(1/T^{1/3})$ with STORM and Hessian-corrected momentum. Our results hold with decreasing stepsizes and small mini-batches. Finally, our empirical experiments confirm our theoretical insights.
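To make the idea concrete, below is a minimal single-process sketch of an EF21-style loop with normalized updates, a Polyak heavy-ball momentum buffer, and a decreasing, problem-independent stepsize. The Top-K compressor, the momentum parameter, the stepsize schedule exponent, and the in-memory simulation of workers are all illustrative assumptions made here; this is not the paper's exact algorithm or pseudocode.

```python
import numpy as np

def topk(v, k):
    """Keep the k largest-magnitude coordinates (an assumed choice of compressor)."""
    out = np.zeros_like(v)
    if k > 0:
        idx = np.argsort(np.abs(v))[-k:]
        out[idx] = v[idx]
    return out

def normalized_ef21_heavy_ball(grads, x0, T, k=10, beta=0.1, gamma0=1.0):
    """
    Sketch of a normalized EF21-style loop with heavy-ball momentum.
    grads: list of per-worker gradient oracles, grads[i](x) -> gradient estimate.
    gamma_t = gamma0 / (t + 1)**0.75 is an assumed parameter-agnostic schedule,
    chosen without knowledge of smoothness constants.
    """
    n = len(grads)
    x = x0.copy()
    g = [grads[i](x) for i in range(n)]      # local EF21 gradient estimators g_i
    g_bar = np.mean(g, axis=0)               # server-side aggregate of estimators
    m = g_bar.copy()                         # heavy-ball momentum buffer

    for t in range(T):
        gamma_t = gamma0 / (t + 1) ** 0.75   # decreasing, problem-independent stepsize
        m = (1 - beta) * m + beta * g_bar    # Polyak-style momentum on the aggregate
        x = x - gamma_t * m / (np.linalg.norm(m) + 1e-12)  # normalized update

        # EF21-style communication: each worker sends only a compressed correction
        for i in range(n):
            delta = topk(grads[i](x) - g[i], k)
            g[i] = g[i] + delta
        g_bar = np.mean(g, axis=0)
    return x
```

In this sketch, only the compressed corrections `delta` would cross the network in a real distributed deployment, while the normalization and the decaying schedule stand in for the parameter-agnostic stepsize choice discussed in the abstract.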
Similar Papers
Tight analyses of first-order methods with error feedback
Machine Learning (CS)
Makes computers learn faster by talking less.
Composite Optimization with Error Feedback: the Dual Averaging Approach
Optimization and Control
Makes computers learn faster with less data.
Safe-EF: Error Feedback for Nonsmooth Constrained Optimization
Machine Learning (CS)
Robots learn faster and more safely, with less data sent.