Benign Underfitting of Stochastic Gradient Descent

Published: 31 Oct 2022, Last Modified: 12 Jan 2023, NeurIPS 2022 Accept
Keywords: Stochastic Gradient Descent, Convex Optimization, Generalization Error Bounds, Algorithmic Stability
TL;DR: We establish that there exist stochastic convex optimization problems where SGD exhibits a constant generalization gap and fails to minimize the empirical risk.
Abstract: We study to what extent stochastic gradient descent (SGD) may be understood as a ``conventional'' learning rule that achieves generalization performance by obtaining a good fit to training data. We consider the fundamental stochastic convex optimization framework, where (one pass, $\textit{without}$-replacement) SGD is classically known to minimize the population risk at rate $O(1/\sqrt{n})$, and prove that, surprisingly, there exist problem instances where the SGD solution exhibits both empirical risk and generalization gap of $\Omega(1)$. Consequently, it turns out that SGD is not algorithmically stable in $\textit{any}$ sense, and its generalization ability cannot be explained by uniform convergence or any other currently known generalization bound technique for that matter (other than that of its classical analysis). We then continue to analyze the closely related $\textit{with}$-replacement SGD, for which we show that an analogous phenomenon does not occur, and prove that its population risk does in fact converge at the optimal rate. Finally, we interpret our main results in the context of without-replacement SGD for finite-sum convex optimization problems, and derive upper and lower bounds for the multi-epoch regime that significantly improve upon previously known results.
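As a concrete illustration of the two sampling schemes the abstract contrasts, here is a minimal Python sketch of one-pass SGD with iterate averaging; the function name, fixed step size, and seeding are illustrative assumptions rather than the paper's construction.

```python
import numpy as np

def sgd_one_pass(grad, w0, data, lr, with_replacement=False):
    """One-pass SGD over n samples; returns the averaged iterate.

    grad(w, z) -> gradient (or subgradient) of the loss at w on example z,
    assumed convex in w. With with_replacement=False the samples are visited
    in a uniformly random permutation (each exactly once); with
    with_replacement=True each step draws a fresh uniform index, so some
    samples may repeat and others may never be seen.
    """
    n = len(data)
    rng = np.random.default_rng(0)
    order = (rng.integers(0, n, size=n) if with_replacement
             else rng.permutation(n))
    w = np.array(w0, dtype=float)
    iterates = []
    for t in order:
        w = w - lr * grad(w, data[t])   # single stochastic gradient step
        iterates.append(w.copy())
    return np.mean(iterates, axis=0)    # averaged iterate, as in the classical O(1/sqrt(n)) analysis
```

For example, with a squared-distance loss one could call sgd_one_pass(lambda w, z: 2 * (w - z), w0=[0.0], data=samples, lr=0.1). The without-replacement run visits each of the n samples exactly once, which is the one-pass regime for which the abstract states the $\Omega(1)$ empirical-risk and generalization-gap lower bounds; the with-replacement variant is the one shown to converge at the optimal population-risk rate.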