Momentum Tracking: Momentum Acceleration for Decentralized Deep Learning on Heterogeneous Data

Published: 20 Sept 2023, Last Modified: 20 Sept 2023
Accepted by TMLR
Abstract: SGD with momentum is one of the key components for improving the performance of neural networks. For decentralized learning, a straightforward way to use momentum is Distributed SGD (DSGD) with momentum (DSGDm). However, DSGDm performs worse than DSGD when the data distributions are statistically heterogeneous. Recently, several studies have addressed this issue and proposed momentum methods that are more robust to data heterogeneity than DSGDm, although their convergence rates still depend on data heterogeneity and deteriorate when the data distributions are heterogeneous. In this study, we propose Momentum Tracking, a momentum method whose convergence rate is provably independent of data heterogeneity. More specifically, we analyze the convergence rate of Momentum Tracking in the setting where the objective function is non-convex and stochastic gradients are used, and show that it is independent of data heterogeneity for any momentum coefficient $\beta \in [0, 1)$. Through experiments, we demonstrate that Momentum Tracking is more robust to data heterogeneity than existing decentralized learning methods with momentum and consistently outperforms them when the data distributions are heterogeneous.
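The abstract does not spell out the update rule, but since Momentum Tracking combines gradient tracking with momentum, the following minimal NumPy sketch illustrates that combination on a toy decentralized least-squares problem. The loop structure, variable names (`c` for the gradient tracker, `u` for the momentum buffer), and hyperparameters are illustrative assumptions, not the exact algorithm from the paper or the linked repository.

```python
import numpy as np

def local_grad(i, x, A, b):
    """Gradient of node i's local objective 0.5 * ||A_i x - b_i||^2."""
    return A[i].T @ (A[i] @ x - b[i])

def momentum_tracking_sketch(A, b, W, lr=0.01, beta=0.9, steps=300):
    """Illustrative gradient-tracking-with-momentum loop (not the paper's exact update)."""
    n, d = len(A), A[0].shape[1]
    x = np.zeros((n, d))                                            # local parameters
    g = np.stack([local_grad(i, x[i], A, b) for i in range(n)])     # local gradients
    c = g.copy()                                                    # gradient trackers
    u = np.zeros((n, d))                                            # momentum buffers
    for _ in range(steps):
        u = beta * u + c                       # apply momentum to the tracked direction
        x_new = W @ (x - lr * u)               # local step followed by gossip averaging
        g_new = np.stack([local_grad(i, x_new[i], A, b) for i in range(n)])
        c = W @ c + g_new - g                  # gradient-tracking correction
        x, g = x_new, g_new
    return x.mean(axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d = 4, 5
    A = [rng.normal(size=(8, d)) for _ in range(n)]
    b = [rng.normal(size=8) + 2.0 * i for i in range(n)]            # heterogeneous local data
    W = np.full((n, n), 1.0 / n)                                    # doubly stochastic mixing matrix
    x_hat = momentum_tracking_sketch(A, b, W)
    x_star = np.linalg.lstsq(np.vstack(A), np.hstack(b), rcond=None)[0]
    print("distance to centralized solution:", np.linalg.norm(x_hat - x_star))
```

In this sketch, the tracker `c` estimates the average gradient across nodes, so the momentum buffer is driven by a heterogeneity-corrected direction rather than each node's raw local gradient; the official implementation in the linked repository should be consulted for the actual algorithm.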
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: We made the following changes in the camera-ready version.
* We added pseudo-code for QG-DSGDm and DecentLaM in Sec. A.
* We added a discussion of Lu and De Sa (2021) and Yuan et al. (2022) in Sec. B.2.
* We cited Zhao et al. (2022).

### References
Lu, Y. and De Sa, C. (2021). Optimal complexity in decentralized training. In ICML.
Yuan et al. (2022). Revisiting optimal convergence rate for smooth and non-convex stochastic decentralized optimization. In NeurIPS.
Zhao et al. (2022). BEER: Fast O(1/T) rate for decentralized nonconvex optimization with communication compression. In NeurIPS.
Code: https://github.com/yukiTakezawa/MomentumTracking
Supplementary Material: zip
Assigned Action Editor: ~Peter_Richtarik1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 1151