Benefits of Learning Rate Annealing for Tuning-Robustness in Stochastic Optimization

Published: 22 Sept 2025, Last Modified: 01 Dec 2025 · NeurIPS 2025 Workshop · CC BY 4.0
Keywords: stochastic optimization, schedulers, learning rate, cosine, annealing, tuning, robustness, grid search
TL;DR: We prove that annealing schedules are more robust to multiplicative learning rate misspecification in stochastic optimization and validate this experimentally.
Abstract: The learning rate in stochastic gradient methods is a critical hyperparameter that is notoriously costly to tune via standard grid search, especially for training modern large-scale models with billions of parameters. We identify a theoretical advantage of learning rate annealing schemes that decay the learning rate to zero at a polynomial rate, such as the widely-used cosine schedule, by demonstrating their increased robustness to misspecification of the initial learning rate due to a coarse grid search. We present an analysis in a stochastic convex optimization setup demonstrating that the convergence rate of stochastic gradient descent with annealed schedules depends sublinearly on the multiplicative misspecification factor $\rho$ (i.e., the grid resolution), achieving a rate of $\smash{O(\rho^{1/(2p+1)}/\sqrt{T})}$, where $p$ is the degree of polynomial decay and $T$ is the number of steps. This is in contrast to the $\smash{O(\rho/\sqrt{T})}$ rate that arises with fixed stepsizes and exhibits a linear dependence on $\rho$. Experiments confirm the increased robustness compared to tuning with a fixed stepsize, which has significant implications for the computational overhead of hyperparameter search in practical training scenarios.
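
For illustration only, a minimal sketch of the schedule families referenced in the abstract (polynomial decay and cosine annealing), not the authors' experimental setup; the names `eta0`, `T`, and `p`, and the grid values below, are hypothetical choices for demonstration. Near the end of training the cosine schedule decays like a polynomial of degree 2.

```python
import math

def polynomial_lr(eta0: float, t: int, T: int, p: float = 1.0) -> float:
    """Polynomial-decay annealing: eta_t = eta0 * (1 - t/T)**p, reaching zero at step T."""
    return eta0 * (1.0 - t / T) ** p

def cosine_lr(eta0: float, t: int, T: int) -> float:
    """Cosine annealing: eta_t = eta0 * (1 + cos(pi * t / T)) / 2.
    Near t = T this behaves like polynomial decay of degree p = 2."""
    return eta0 * 0.5 * (1.0 + math.cos(math.pi * t / T))

# Example: a coarse grid over the base learning rate, mimicking a grid search
# whose resolution introduces a multiplicative misspecification factor rho.
T = 1000
for eta0 in (0.01, 0.1, 1.0):  # hypothetical grid values
    schedule = [cosine_lr(eta0, t, T) for t in range(T)]
```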
Submission Number: 28