Stacey: Promoting Stochastic Steepest Descent via Accelerated $\ell_p$-Smooth Nonconvex Optimization

Published: 01 May 2025, Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: We propose Stacey, an accelerated $\ell_p$ steepest descent algorithm that leverages non-Euclidean geometry to achieve faster convergence and higher accuracy.
Abstract: While popular optimization methods such as SGD, AdamW, and Lion depend on steepest descent updates in either the $\ell_2$ or $\ell_\infty$ norm, there remains a critical gap in handling the non-Euclidean structure observed in modern deep network training. In this work, we address this need by introducing a new accelerated $\ell_p$ steepest descent algorithm, called Stacey, which uses interpolated primal-dual iterate sequences to effectively navigate non-Euclidean smooth optimization tasks. In addition to providing novel theoretical guarantees for the foundations of our algorithm, we empirically compare our approach against these popular methods on tasks including image classification and large language model (LLM) pretraining, demonstrating both faster convergence and higher final accuracy. We further evaluate different values of $p$ across various models and datasets, underscoring the importance and efficiency of non-Euclidean approaches over standard Euclidean methods. Code can be found at https://github.com/xinyuluo8561/Stacey.
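For intuition, the sketch below illustrates the plain (unaccelerated) $\ell_p$ steepest descent update that such methods generalize: the step direction is the unit-$\ell_p$-norm vector most aligned with the gradient, which reduces to the normalized gradient at $p=2$ (SGD-like) and approaches $\mathrm{sign}(\nabla f)$ as $p\to\infty$ (Lion/signSGD-like). This is not the paper's Stacey algorithm, which additionally interpolates primal-dual iterate sequences for acceleration; the function names and the example value $p=3$ are illustrative assumptions.

```python
import torch

def lp_steepest_direction(grad: torch.Tensor, p: float) -> torch.Tensor:
    """Unit-l_p-norm direction maximizing <grad, v> (by Hoelder's inequality).

    For p = 2 this is the normalized gradient; as p -> infinity it
    approaches sign(grad).
    """
    q = p / (p - 1.0)                          # dual exponent, 1/p + 1/q = 1
    d = grad.sign() * grad.abs().pow(q - 1.0)  # componentwise |g|^(q-1) with g's sign
    return d / d.norm(p).clamp_min(1e-12)      # rescale to unit l_p norm

@torch.no_grad()
def lp_steepest_descent_step(params, lr: float = 1e-2, p: float = 3.0):
    """One steepest descent step in the l_p geometry (illustrative only)."""
    for w in params:
        if w.grad is not None:
            w.add_(lp_steepest_direction(w.grad, p), alpha=-lr)
```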
Lay Summary: Many popular training methods, such as SGD and Adam, rely on problem geometries not always reflected in modern deep learning. We introduce Stacey, a new primal-dual steepest descent algorithm that combines updates in different geometries to further accelerate optimization. Stacey is both theoretically and empirically justified, outperforming existing methods on tasks like image classification and language model pretraining.
Primary Area: Optimization->Stochastic
Keywords: Non-convex Optimization, Non-Euclidean Acceleration, Stochastic Steepest Descent
Submission Number: 12883