Beyond First-Order: Training LLMs with Stochastic Conjugate Subgradient and AdamW

Published: 28 Nov 2025, Last Modified: 30 Nov 2025, NeurIPS 2025 Workshop MLxOR, CC BY 4.0
Keywords: LLM Training, Stochastic Conjugate Subgradient, Beyond First Order
TL;DR: Training LLMs with Stochastic Conjugate Subgradient and AdamW
Abstract: Algorithms based on stochastic gradient descent (SGD) have long been central to training large language models (LLMs). However, their effectiveness can degrade, particularly in large-scale applications where empirical evidence points to performance limitations. In response, this paper proposes a stochastic conjugate subgradient method with adaptive sampling tailored to training LLMs. The method not only achieves faster convergence per iteration but also demonstrates improved scalability compared to traditional SGD techniques. It combines several fundamental ingredients: adaptive sample complexity analysis, a stochastic conjugate subgradient approach for determining search directions, and an AdamW-like scheme for adaptively adjusting step sizes. This approach preserves the key advantages of first-order methods while effectively addressing the non-smoothness inherent in training LLMs. Experimental results show that the proposed method not only maintains, but in many cases surpasses, the scalability of traditional SGD techniques, significantly enhancing both the speed and accuracy of the optimization process.
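
Since the abstract does not give implementation details, the following is a minimal sketch of how such an update might look, assuming a Polak-Ribière-style conjugate direction built from stochastic subgradients, AdamW-style moment estimates with decoupled weight decay, and a fixed mini-batch size as a stand-in for adaptive sampling. All function names, hyperparameters, and the toy objective are illustrative assumptions, not the authors' algorithm or code.

```python
# Sketch only: conjugate-subgradient direction + AdamW-like adaptive scaling.
# Not the authors' method; names and hyperparameters are assumptions.
import numpy as np

def conjugate_subgradient_adamw_step(w, g, state, lr=1e-3, betas=(0.9, 0.999),
                                     eps=1e-8, weight_decay=1e-2):
    """One parameter update given a stochastic subgradient g at w."""
    g_prev = state.get("g_prev")
    d_prev = state.get("d_prev")

    # Conjugate direction: d_t = -g_t + beta_PR * d_{t-1} (Polak-Ribière, clipped at 0).
    if d_prev is None:
        d = -g
    else:
        beta_pr = max(0.0, g @ (g - g_prev) / (g_prev @ g_prev + eps))
        d = -g + beta_pr * d_prev

    # AdamW-like first/second moment estimates, built on the search direction.
    m = betas[0] * state.get("m", np.zeros_like(w)) + (1 - betas[0]) * d
    v = betas[1] * state.get("v", np.zeros_like(w)) + (1 - betas[1]) * d**2
    t = state.get("t", 0) + 1
    m_hat = m / (1 - betas[0] ** t)
    v_hat = v / (1 - betas[1] ** t)

    # Decoupled weight decay (as in AdamW), then the adaptively scaled step
    # along the (descent) direction m_hat.
    w = w - lr * weight_decay * w + lr * m_hat / (np.sqrt(v_hat) + eps)

    state.update(g_prev=g, d_prev=d, m=m, v=v, t=t)
    return w, state

# Illustrative usage on a toy nonsmooth objective f(w) = (1/m) * ||A w - b||_1.
rng = np.random.default_rng(0)
A, b = rng.normal(size=(64, 10)), rng.normal(size=64)
w, state = np.zeros(10), {}
for _ in range(200):
    idx = rng.choice(64, size=16, replace=False)               # fixed-size mini-batch
    g = A[idx].T @ np.sign(A[idx] @ w - b[idx]) / len(idx)     # stochastic subgradient
    w, state = conjugate_subgradient_adamw_step(w, g, state)
```

In this sketch the conjugate recursion supplies the search direction while the AdamW-style moments only rescale it per coordinate; the paper's adaptive sample complexity and step-size selection rules are not reproduced here.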
Submission Number: 81