Compressed Decentralized Momentum Stochastic Gradient Methods for Nonconvex Optimization

Published: 07 Aug 2025, Last Modified: 07 Aug 2025. Accepted by TMLR. License: CC BY 4.0
Abstract: In this paper, we design two compressed decentralized algorithms for solving nonconvex stochastic optimization under two different scenarios. Both algorithms adopt a momentum technique to achieve fast convergence and a message-compression technique to save communication costs. Though momentum acceleration and compressed communication have each been used in the literature, it is highly nontrivial to theoretically prove the effectiveness of their composition in a decentralized algorithm that retains the benefits of both, because of the need to simultaneously control the consensus error, the compression error, and the bias from the momentum gradient. For the scenario where gradients are bounded, our proposal is a compressed decentralized adaptive method. To the best of our knowledge, this is the first decentralized adaptive stochastic gradient method with compressed communication. For the scenario of data heterogeneity without bounded gradients, our proposal is a compressed decentralized heavy-ball method, which applies a gradient-tracking technique to address the challenge of data heterogeneity. Notably, both methods achieve an optimal convergence rate; they also attain linear speedup and admit topology-independent algorithmic parameters within a certain regime of the user-specified error tolerance. Superior empirical performance is observed over state-of-the-art methods on training deep neural networks (DNNs) and Transformers.
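To make the composition of momentum and compressed communication concrete, the following is a minimal single-node sketch of a heavy-ball momentum step whose outgoing message is top-k compressed. It is illustrative only: the function names (`topk_compress`, `momentum_step`) and all hyperparameters are hypothetical, and the paper's actual algorithms additionally manage consensus, gradient tracking, and compression-error control across a network of agents.

```python
import numpy as np

def topk_compress(v, k):
    """Keep the k largest-magnitude entries of v and zero the rest:
    a common biased compressor in compressed decentralized methods."""
    idx = np.argsort(np.abs(v))[-k:]
    out = np.zeros_like(v)
    out[idx] = v[idx]
    return out

def momentum_step(x, m, grad, lr=0.1, beta=0.9, k=2):
    """One heavy-ball style momentum update where the quantity that
    would be communicated to neighbors is top-k compressed."""
    m = beta * m + (1 - beta) * grad   # momentum buffer
    msg = topk_compress(m, k)          # compress the outgoing message
    x = x - lr * msg                   # descend along the compressed momentum
    return x, m

# Toy quadratic f(x) = 0.5 * ||x||^2, so grad f(x) = x.
x = np.array([1.0, -2.0, 0.5, 3.0])
m = np.zeros_like(x)
for _ in range(200):
    x, m = momentum_step(x, m, grad=x)
print(np.linalg.norm(x))  # the iterate norm shrinks toward 0
```

Even with the biased top-k compressor discarding half the coordinates per step, the momentum buffer keeps accumulating the full gradient, so every coordinate is eventually selected and driven down; controlling this interaction rigorously in the decentralized setting is the difficulty the abstract describes.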
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: 1. More detailed comparisons to existing works are added in Section D of the Appendix. In Section D.1, we added a term-by-term comparison with the prior work in Table 1, explaining the comparison for each column of Table 1. In Section D.2, we added more related work and compared our two methods to four more existing methods. 2. In Section E of the Appendix, we included additional numerical plots. 3. In Section 3.1, we added more description of Algorithm 1 by explaining what each line of the algorithm does; in Section 3.2, we added one paragraph explaining what each line of Algorithm 2 does.
Video: https://drive.google.com/file/d/1PSRcCIthgSMPr0Zkev6aUcI_jY4HyUi8/view?usp=sharing
Code: https://github.com/DecentralizedMethods/DAMSCo_DaSHCo
Assigned Action Editor: ~Anastasios_Kyrillidis2
Submission Number: 4511