Local Linear Convergence of Projected Gradient Descent: A Discrete and Continuous Analysis

03 Sept 2025 (modified: 19 Nov 2025) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: Projected gradient descent
Abstract: The Projected Gradient Descent (PGD) method has been successfully applied to a variety of machine learning problems. Prior work shows that PGD, as a classical discrete iterative method, converges sublinearly when the objective function is convex and smooth. In this paper, we explore the local linear convergence properties of PGD in this setting from both discrete and continuous perspectives. Specifically, we focus on optimization problems with a general convex and smooth objective function constrained to the ball $\mathbb{B}(\pmb{0}, \epsilon)$, and present the following principal results: $\textbf{(I)}$ We derive an ordinary differential equation (ODE) that arises as the continuous-time limit of PGD; $\textbf{(II)}$ We establish convergence rate bounds for PGD in both discrete-time and continuous-time scenarios, with the continuous-time analysis motivated by the derived ODE. The bounds in both scenarios support each other and consistently indicate that PGD achieves a local linear convergence rate. Finally, we conduct experiments to validate theoretical results $\textbf{(I)}$ and $\textbf{(II)}$; the experimental outcomes closely align with our theoretical findings.
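The constrained setting studied in the abstract — PGD on a convex, smooth objective restricted to the ball $\mathbb{B}(\pmb{0}, \epsilon)$ — can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the quadratic objective, step size, and iteration count are assumptions chosen for demonstration. Projection onto the Euclidean ball has the closed form of rescaling any point outside the ball back to its boundary.

```python
import numpy as np

def project_ball(x, eps):
    """Euclidean projection onto B(0, eps): rescale x if it lies outside the ball."""
    norm = np.linalg.norm(x)
    return x if norm <= eps else (eps / norm) * x

def pgd(grad, x0, eps, step=0.1, iters=500):
    """Projected gradient descent: a gradient step followed by projection.

    `grad`, `step`, and `iters` are illustrative parameters, not taken
    from the paper.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = project_ball(x - step * grad(x), eps)
    return x

# Example: minimize f(x) = 0.5 * ||x - b||^2 over B(0, 1) with ||b|| > 1.
# The constrained minimizer is the boundary point b / ||b||.
b = np.array([3.0, 4.0])                       # ||b|| = 5, outside the ball
x_star = pgd(lambda x: x - b, np.zeros(2), eps=1.0, step=0.5, iters=200)
```

For this strongly convex quadratic the iterates hit the boundary point $b/\lVert b\rVert$ quickly, consistent with the local linear rate the paper analyzes; the sublinear worst-case bound applies to the broader merely-convex class.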
Primary Area: optimization
Submission Number: 1177