Score: 3

Compressed Decentralized Momentum Stochastic Gradient Methods for Nonconvex Optimization

Published: August 7, 2025 | arXiv ID: 2508.04950v1

By: Wei Liu, Anweshit Panda, Ujwal Pandey, and more

BigTech Affiliations: IBM

Potential Business Impact:

Lets many machines train AI models together faster by compressing the messages they exchange, cutting communication costs.

In this paper, we design two compressed decentralized algorithms for solving nonconvex stochastic optimization under two different scenarios. Both algorithms adopt a momentum technique to achieve fast convergence and a message-compression technique to save communication costs. Though momentum acceleration and compressed communication have been used in the literature, it is highly nontrivial to theoretically prove the effectiveness of their composition in a decentralized algorithm that retains the benefits of both, because of the need to simultaneously control the consensus error, the compression error, and the bias from the momentum gradient. For the scenario where gradients are bounded, our proposal is a compressed decentralized adaptive method. To the best of our knowledge, this is the first decentralized adaptive stochastic gradient method with compressed communication. For the scenario of data heterogeneity without bounded gradients, our proposal is a compressed decentralized heavy-ball method, which applies a gradient tracking technique to address the challenge of data heterogeneity. Notably, both methods achieve an optimal convergence rate, and they can achieve linear speedup and adopt topology-independent algorithmic parameters within a certain regime of the user-specified error tolerance. Superior empirical performance is observed over state-of-the-art methods on training deep neural networks (DNNs) and Transformers.
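To make the ingredients in the abstract concrete, below is a minimal toy sketch of how top-k compressed communication, heavy-ball momentum, and gradient tracking can be combined in a decentralized loop. This is not the paper's algorithm: the function names, the top-k compressor, the naive compressed gossip step, the two-node quadratic problem, and all step sizes are illustrative assumptions chosen for brevity (a real method would also use error feedback and carefully tuned parameters).

```python
import numpy as np

def topk_compress(v, k):
    """Keep only the k largest-magnitude entries of v (a common compressor)."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def decentralized_heavy_ball(grad_fns, W, x0, steps=200, lr=0.05, beta=0.9, k=2):
    """Toy compressed decentralized heavy-ball with gradient tracking.

    grad_fns : list of per-node stochastic gradient callables (one per node)
    W        : doubly stochastic mixing matrix (n x n) for the network topology
    x0       : shared initial point, shape (d,)
    """
    n, d = len(grad_fns), x0.size
    x = np.tile(x0, (n, 1))                       # local iterates
    m = np.zeros((n, d))                          # heavy-ball momentum buffers
    g = np.array([f(x[i]) for i, f in enumerate(grad_fns)])
    y = g.copy()                                  # gradient-tracking variables
    for _ in range(steps):
        # 1) heavy-ball momentum update driven by the tracked gradient estimate
        m = beta * m + y
        x_new = x - lr * m
        # 2) exchange compressed model differences with neighbors (gossip step)
        msg = np.array([topk_compress(x_new[i] - x[i], k) for i in range(n)])
        x = W @ (x + msg)
        # 3) gradient tracking: mix, then correct with the fresh local gradient
        g_new = np.array([f(x[i]) for i, f in enumerate(grad_fns)])
        y = W @ y + g_new - g
        g = g_new
    return x.mean(axis=0)

# Usage: two nodes with heterogeneous noisy quadratic losses, fully connected.
if __name__ == "__main__":
    targets = [np.array([1.0, -2.0, 0.5]), np.array([-1.0, 3.0, 0.0])]
    grad_fns = [lambda x, t=t: x - t + 0.01 * np.random.randn(3) for t in targets]
    W = np.array([[0.5, 0.5], [0.5, 0.5]])
    print(decentralized_heavy_ball(grad_fns, W, np.zeros(3)))
```

In this toy setup the nodes hold different targets (data heterogeneity), yet the averaged iterate drifts toward the minimizer of the sum of losses because the tracking variable y estimates the global gradient while only sparse, compressed differences are communicated.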

Country of Origin
🇺🇸 United States

Repos / Data Links

Page Count
49 pages

Category
Computer Science:
Machine Learning (CS)