A Linearly Convergent Proximal Gradient Algorithm for Decentralized Optimization

Sulaiman A. Alghunaim, Kun Yuan, Ali H. Sayed

06 Sept 2019 (modified: 05 May 2023) · NeurIPS 2019 · Readers: Everyone
Abstract: Decentralized optimization is a promising paradigm that finds various applications in engineering and machine learning. This work studies decentralized composite optimization problems with a non-smooth regularization term. Most existing gradient-based proximal decentralized methods are shown to converge to the desired solution only at sublinear rates, and it remains unclear how to establish linear convergence for this family of methods when the objective function is strongly convex. To tackle this problem, this work considers the non-smooth regularization term to be common across all networked agents, which is the case for most centralized machine learning implementations. Under this scenario, we design a proximal gradient decentralized algorithm whose fixed point coincides with the desired minimizer. We then provide a concise proof that establishes its linear convergence. In the absence of the non-smooth term, our analysis technique covers some well-known decentralized algorithms such as EXTRA and DIGing.
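To make the problem setting concrete, the sketch below illustrates a generic decentralized proximal gradient iteration for the composite problem the abstract describes, where a smooth local loss at each agent is combined with a non-smooth regularizer common to all agents. This is not the authors' proposed algorithm; it is a minimal example of the existing family of methods the abstract refers to, with hypothetical choices throughout (ring network with Metropolis weights, quadratic local losses A_i, b_i, an l1 regularizer with weight lam, and step size alpha).

```python
import numpy as np

# Illustrative sketch (NOT the paper's algorithm): a generic decentralized
# proximal gradient iteration for
#   minimize (1/n) * sum_i f_i(x) + g(x),
# where g (here an l1 penalty) is the non-smooth term common to all agents
# and f_i(x) = 0.5 * ||A_i x - b_i||^2 is a strongly convex local loss.
# All data, network, and parameter choices below are assumptions for illustration.

rng = np.random.default_rng(0)
n_agents, dim = 5, 10

# Hypothetical local data held by each agent.
A = [rng.standard_normal((20, dim)) for _ in range(n_agents)]
b = [rng.standard_normal(20) for _ in range(n_agents)]

# Symmetric doubly stochastic mixing matrix for a ring network (Metropolis weights).
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    for j in ((i - 1) % n_agents, (i + 1) % n_agents):
        W[i, j] = 1.0 / 3.0
    W[i, i] = 1.0 - W[i].sum()

lam, alpha = 0.1, 0.01  # regularization weight and step size (assumed values)

def grad_f(i, x):
    """Gradient of the local smooth loss f_i(x) = 0.5 * ||A_i x - b_i||^2."""
    return A[i].T @ (A[i] @ x - b[i])

def prox_g(v, t):
    """Proximal operator of t * lam * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t * lam, 0.0)

X = np.zeros((n_agents, dim))  # one row per agent
for k in range(500):
    # Each agent averages its neighbors' iterates, takes a local gradient
    # step, then applies the proximal map of the common regularizer.
    mixed = W @ X
    X = np.stack([prox_g(mixed[i] - alpha * grad_f(i, X[i]), alpha)
                  for i in range(n_agents)])

print("disagreement across agents:", np.linalg.norm(X - X.mean(axis=0)))
```

Iterations of this plain form are the ones known to converge only sublinearly; the paper's contribution is a modified proximal gradient decentralized algorithm, together with a proof of linear convergence under strong convexity when the non-smooth term is shared by all agents.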
CMT Num: 1614