Provable and Practical Online Learning Rate Adaptation with Hypergradient Descent

Published: 01 May 2025, Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY-SA 4.0
TL;DR: The first rigorous theoretical study of hypergradient descent
Abstract: This paper investigates the convergence properties of the hypergradient descent method ($\texttt{HDM}$), a 25-year-old heuristic originally proposed for adaptive stepsize selection in stochastic first-order methods. We provide the first rigorous convergence analysis of $\texttt{HDM}$ using the online learning framework and apply this analysis to develop new state-of-the-art adaptive gradient methods with both empirical and theoretical support. Notably, $\texttt{HDM}$ automatically identifies the optimal stepsize for the local optimization landscape and achieves local superlinear convergence. Our analysis explains the instability of $\texttt{HDM}$ reported in the literature and proposes efficient strategies to address it. We also develop two $\texttt{HDM}$ variants with heavy-ball and Nesterov momentum. Experiments on deterministic convex problems show that $\texttt{HDM}$ with heavy-ball momentum ($\texttt{HDM-HB}$) exhibits robust performance and significantly outperforms other adaptive first-order methods. Moreover, $\texttt{HDM-HB}$ often matches the performance of $\texttt{L-BFGS}$, an efficient and practical quasi-Newton method, while using less memory and cheaper iterations.
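To make the heuristic under study concrete, here is a minimal sketch of the classic hypergradient descent stepsize update applied to plain gradient descent on a toy quadratic. This illustrates only the original heuristic that the paper analyzes, not the paper's $\texttt{HDM-HB}$ method or its stabilization strategies; the quadratic objective, the initial stepsize `alpha`, and the hyper-stepsize `beta` are illustrative assumptions.

```python
import numpy as np

def grad(x, A, b):
    # Gradient of the toy objective f(x) = 0.5 * x^T A x - b^T x
    return A @ x - b

def hypergradient_descent(A, b, x0, alpha=1e-3, beta=1e-4, iters=500):
    """Gradient descent with the classic hypergradient stepsize update."""
    x = x0.copy()
    g_prev = np.zeros_like(x0)  # no stepsize update on the first iteration
    for _ in range(iters):
        g = grad(x, A, b)
        # Hypergradient of the loss w.r.t. the stepsize is -<g_t, g_{t-1}>,
        # so the stepsize grows when consecutive gradients align and
        # shrinks when they point in opposing directions.
        alpha = alpha + beta * float(g @ g_prev)
        x = x - alpha * g
        g_prev = g
    return x, alpha

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = np.diag(rng.uniform(1.0, 10.0, size=20))  # well-conditioned PSD matrix
    b = rng.standard_normal(20)
    x_star, alpha_final = hypergradient_descent(A, b, np.zeros(20))
    print("residual norm:", np.linalg.norm(A @ x_star - b),
          "adapted stepsize:", alpha_final)
```

The single scalar stepsize adapted online in this sketch is the quantity whose behavior (including the instability noted in the abstract) the paper studies through the online learning lens.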
Lay Summary: The learning rate is a critical hyperparameter that significantly influences the convergence speed of first-order optimization algorithms, yet choosing it adaptively is challenging. We developed an efficient learning rate update strategy based on online learning, proved its convergence guarantees, and investigated its convergence behavior. Our analysis puts a 25-year-old heuristic for adaptive stepsize selection on rigorous footing. The resulting algorithm significantly outperforms the state of the art.
Link To Code: https://github.com/udellgroup/hypergrad
Primary Area: Optimization->Convex
Keywords: First-order method, online convex optimization, stepsize scheduling
Submission Number: 5534