Fast Last-Iterate Convergence of SGD in the Smooth Interpolation Regime

Published: 18 Sept 2025, Last Modified: 29 Oct 2025, NeurIPS 2025 poster, CC BY 4.0
Keywords: last-iterate, low-noise, interpolation, realizable, sgd, stochastic, convex, smooth
TL;DR: We present a sharp last-iterate analysis of SGD on smooth convex losses in the interpolation regime, extending prior results beyond linear regression and improving known rates for large, constant stepsizes.
Abstract: We study population convergence guarantees of stochastic gradient descent (SGD) for smooth convex objectives in the interpolation regime, where the noise at the optimum is zero or near zero. The behavior of the last iterate of SGD in this setting, particularly with large (constant) stepsizes, has received growing attention in recent years due to its implications for training over-parameterized models, as well as for analyzing forgetting in continual learning and for understanding the convergence of the randomized Kaczmarz method for solving linear systems. We establish that after $T$ steps of SGD on $\beta$-smooth convex loss functions with stepsize $0 < \eta < 2/\beta$, the last iterate exhibits expected excess risk $\widetilde{O}(\tfrac{1}{\eta (2-\beta \eta) T^{1-\beta\eta/2}} + \tfrac{\eta}{(2-\beta\eta)^2} T^{\beta\eta/2} \sigma_\star^2)$, where $\sigma_\star^2$ denotes the variance of the stochastic gradients at the optimum. In particular, for a well-tuned stepsize we obtain a near-optimal $\widetilde{O}(1/T + \sigma_\star/\sqrt T)$ rate for the last iterate, extending the results of Varre et al. (2021) beyond least-squares regression; and when $\sigma_\star=0$ we obtain a rate of $O(1/\sqrt T)$ with $\eta=1/\beta$, improving upon the best-known $O(T^{-1/4})$ rate recently established by Evron et al. (2025) in the special case of realizable linear regression.
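As a concrete illustration of the setting in the abstract (not code from the paper), the sketch below runs constant-stepsize SGD with $\eta = 1/\beta$ on a realizable least-squares problem, i.e. the $\sigma_\star = 0$ interpolation regime that connects to the randomized Kaczmarz method. The problem dimensions, data distribution, and the empirical-risk proxy used for monitoring are assumptions made for this toy example only.

```python
# Illustrative sketch (assumptions noted above): constant-stepsize SGD on a
# realizable least-squares problem, i.e. the sigma_star = 0 regime.
import numpy as np

rng = np.random.default_rng(0)
n, d, T = 200, 20, 5000

# Realizable linear regression: y_i = <x_i, w_star> exactly, so every sample
# loss is minimized at w_star and the gradient noise at the optimum is zero.
X = rng.standard_normal((n, d))
w_star = rng.standard_normal(d)
y = X @ w_star

# Each per-sample loss f_i(w) = 0.5 * (<x_i, w> - y_i)^2 is ||x_i||^2-smooth,
# so beta = max_i ||x_i||^2 bounds the smoothness; the regime requires
# 0 < eta < 2 / beta.
beta = np.max(np.linalg.norm(X, axis=1) ** 2)
eta = 1.0 / beta  # the stepsize appearing in the sigma_star = 0 result

w = np.zeros(d)
for t in range(T):
    i = rng.integers(n)                    # sample one data point uniformly
    grad = (X[i] @ w - y[i]) * X[i]        # stochastic gradient of f_i at w
    w -= eta * grad                        # SGD step; only the last iterate is kept

# Excess risk proxy: average per-sample loss at the last iterate (the optimum
# interpolates, so its loss is exactly zero).
risk = 0.5 * np.mean((X @ w - y) ** 2)
print(f"beta = {beta:.2f}, eta = {eta:.4f}, excess risk after T={T}: {risk:.3e}")
```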
Primary Area: Optimization (e.g., convex and non-convex, stochastic, robust)
Submission Number: 23315