The Surprising Agreement Between Convex Optimization Theory and Learning-Rate Scheduling for Large Model Training

Published: 01 May 2025, Last Modified: 18 Jun 2025, ICML 2025 poster, CC BY 4.0
TL;DR: Learning-rate schedules in deep learning behave strikingly similarly to what convex optimization theory predicts
Abstract: We show that learning-rate schedules for large model training behave surprisingly similarly to a performance bound from non-smooth convex optimization theory. We provide a bound for the constant schedule with linear cooldown; in particular, the practical benefit of cooldown is reflected in the bound through the absence of logarithmic terms. Further, we show that this surprisingly close match between optimization theory and practice can be exploited for learning-rate tuning: we achieve noticeable improvements for training 124M and 210M Llama-type models by (i) extending the schedule for continued training with an optimal learning rate, and (ii) transferring the optimal learning rate across schedules.
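The schedule analyzed in the abstract, a constant learning rate followed by a linear cooldown to zero, can be written down in a few lines. The sketch below is an illustrative, generic implementation rather than the authors' code from the repository linked further down; the function name, the 0-indexed step convention, and the default cooldown fraction are assumptions made for the example.

```python
def constant_with_linear_cooldown(step, total_steps, base_lr, cooldown_frac=0.2):
    """Learning rate at `step` (0-indexed) out of `total_steps`.

    The rate stays at `base_lr` for the first (1 - cooldown_frac) fraction of
    training, then decays linearly to 0 at the final step. The default
    cooldown fraction of 0.2 is an illustrative choice, not the paper's.
    """
    cooldown_start = int((1.0 - cooldown_frac) * total_steps)
    if step < cooldown_start:
        return base_lr
    # Linear decay from base_lr at cooldown_start down to 0 at total_steps.
    remaining = total_steps - step
    cooldown_len = total_steps - cooldown_start
    return base_lr * remaining / cooldown_len


# Example: the schedule over 1000 steps with a base learning rate of 3e-4.
lrs = [constant_with_linear_cooldown(t, 1000, 3e-4) for t in range(1000)]
```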
Lay Summary: The problem of training machine learning models is often formulated as a complicated optimization problem, which is generally handled via iterative optimization algorithms. A particularly crucial choice in this procedure is the size of the steps taken by the optimization algorithm over the course of training (the "learning-rate schedule"). We show that many empirical effects of these schedules can be explained by a theoretical model for convex optimization. This is surprising because practical training problems are known to be non-convex, yet the theory still appears to match the observed behavior. It is also surprising because optimization theory often fails to make accurate predictions about the real-world behavior of optimization algorithms in machine learning. As an application, we can use our theoretical model to design better schedules for practical training scenarios. This is more efficient than a trial-and-error approach and helps to reduce the computational burden of the training procedure.
Link To Code: https://github.com/fabian-sp/lr-scheduling
Primary Area: Optimization
Keywords: Learning rate schedules, convex optimization theory, large model training, continual training
Submission Number: 244