Extending $\mu$P: Spectral Conditions for Feature Learning Across Optimizers

ICLR 2026 Conference Submission 21622 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Maximal Update Parameterization, Hyperparameter transfer, scalable training, adaptive optimizers, scaling laws, spectral learning, scalable optimization for ML, optimization for deep learning
Abstract: Several variants of adaptive first-order and second-order methods have been proposed to accelerate and scale the training of large language models. The performance of these optimization routines is highly sensitive to the choice of hyperparameters (HPs), which are computationally expensive to tune for large-scale models. Maximal update parameterization ($\mu$P) is a set of scaling rules that aims to make the optimal HPs independent of model size, allowing HPs tuned on a smaller (computationally cheaper) model to be transferred to train a larger target model. Despite promising results for SGD and Adam, deriving $\mu$P for other optimizers is challenging because the underlying tensor-programs approach is difficult to grasp. Building on recent work that introduced spectral conditions as an alternative to tensor programs, we propose a novel framework for deriving $\mu$P for a broader class of optimizers, including AdamW, ADOPT, LAMB, Sophia, and Shampoo. We validate our derivations on several benchmark models and demonstrate zero-shot learning-rate transfer across increasing model width for these optimizers. Further, we provide empirical insights into depth-scaling parameterization for these optimizers.
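
For background, the spectral condition referenced in the abstract (as formulated in the prior work on spectral conditions for feature learning that this submission builds on; the statement below is given as background and should be checked against that reference, not read as this submission's own result) asks that each layer's weight matrix and weight update scale with the layer dimensions as

$$\|W_\ell\|_{*} = \Theta\!\left(\sqrt{\frac{n_\ell}{n_{\ell-1}}}\right), \qquad \|\Delta W_\ell\|_{*} = \Theta\!\left(\sqrt{\frac{n_\ell}{n_{\ell-1}}}\right),$$

where $n_{\ell-1}$ and $n_\ell$ are the fan-in and fan-out of layer $\ell$ and $\|\cdot\|_{*}$ denotes the spectral norm (largest singular value). Imposing this condition on the updates produced by a given optimizer is what yields the width-dependent initialization and learning-rate scaling rules that enable zero-shot HP transfer.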
Supplementary Material: pdf
Primary Area: optimization
Submission Number: 21622