Keywords: Second-order Momentum, Linear Minimization Oracle, Stochastic Optimization
Abstract: The use of momentum in stochastic optimization algorithms has shown empirical success across a range of machine learning tasks.
Recently, a new class of momentum-based stochastic algorithms has emerged within the Linear Minimization Oracle (LMO) framework--leading to methods such as Muon, Scion, and Gluon--for effectively solving deep neural network training problems. However, traditional stochastic momentum methods offer convergence guarantees no better than $\mathcal{O}(1/K^{1/4})$. While several approaches--such as Hessian-Corrected Momentum (HCM)--have aimed to improve this rate, their theoretical results are generally restricted to the Euclidean norm setting. This limitation hinders their applicability to problems where arbitrary norms are required. In this paper, we extend the LMO-based framework by integrating HCM, and provide convergence guarantees under relaxed smoothness and arbitrary norm settings. Specifically, we establish an improved convergence rate of $\mathcal{O}(1/K^{1/3})$ for HCM, thereby surpassing the classical momentum rate and allowing the algorithms to better adapt to the geometry of the problem. Experimental results on training Multi-Layer Perceptrons (MLPs) and Long Short-Term Memory (LSTM) networks support our theoretical findings, demonstrating that the proposed LMO-based algorithms with HCM significantly outperform their vanilla counterparts that use traditional momentum.
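To make the abstract's ingredients concrete, the following is a minimal sketch (not the paper's actual algorithm) of an LMO step combined with a Hessian-corrected momentum update in one common form, where the stale momentum is corrected by a Hessian-vector product along the last displacement. The Euclidean-ball oracle `lmo_l2`, the finite-difference `hvp`, and all hyperparameter values are illustrative assumptions; the paper's methods operate with respect to arbitrary norms.

```python
import numpy as np

def lmo_l2(g, radius=1.0):
    # Linear minimization oracle over an L2 ball (illustrative choice of norm):
    #   argmin_{||s||_2 <= radius} <g, s> = -radius * g / ||g||_2
    n = np.linalg.norm(g)
    return -radius * g / n if n > 0 else np.zeros_like(g)

def hvp(grad_fn, x, v, eps=1e-5):
    # Finite-difference Hessian-vector product H(x) v (assumes f is smooth).
    return (grad_fn(x + eps * v) - grad_fn(x - eps * v)) / (2 * eps)

def hcm_lmo(grad_fn, x0, steps=100, beta=0.9, gamma=0.1, radius=1.0):
    # Sketch of an LMO-based method with Hessian-corrected momentum (one
    # common HCM variant; hyperparameters are hypothetical).
    x, x_prev = x0.copy(), x0.copy()
    m = grad_fn(x0)
    for _ in range(steps):
        g = grad_fn(x)
        # Correct the old momentum by H(x)(x - x_prev) before averaging,
        # which reduces the bias of the momentum estimator.
        m = beta * (m + hvp(grad_fn, x, x - x_prev)) + (1 - beta) * g
        x_prev = x.copy()
        x = x + gamma * lmo_l2(m, radius)
    return x
```

On a simple quadratic, `hcm_lmo(lambda x: x, np.array([2.0, 0.0]))` drives the iterate into a neighborhood of the minimizer whose size is set by the fixed step `gamma * radius`.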
Primary Area: optimization
Submission Number: 17645