Muon: Training and Trade-offs with Latent Attention and MoE

JSYS 2025 October Papers, Submission 2

30 Sept 2025 (modified: 02 Oct 2025) · License: CC BY-NC 4.0
Keywords: mixture of experts, efficient transformers, multi-head latent attention, sparse models, memory-efficient inference, muon, optimizers
Abstract: We present a comprehensive theoretical and empirical study of the \emph{Muon} optimizer for training small- to medium-scale decoder-only transformers (30M--200M parameters), with an emphasis on its mathematical foundations, convergence properties, and interactions with modern architectural optimizations. Building on recent work showing Muon's scalability~\cite{liu2025muon,essentialai2025muon}, we provide a rigorous theoretical analysis including: (i) convergence guarantees establishing an $\mathcal{O}(1/\sqrt{T})$ rate under standard assumptions, (ii) spectral regularization properties that prevent gradient explosion, (iii) a connection to natural gradient descent on the Stiefel manifold, and (iv) equivalence to steepest descent under the spectral norm. Crucially, we demonstrate that Muon expands the Pareto frontier of the compute-time trade-off by maintaining superior data efficiency at large batch sizes, a key finding of~\cite{essentialai2025muon} that we validate across our model scales. Empirically, Muon reaches the target loss with 48--52\% of the training compute required by AdamW while matching or improving final perplexity, consistent with larger-scale results. When combined with Multi-Head Latent Attention (MLA) and Mixture-of-Experts (MoE), we observe multiplicative efficiency gains: MLA+MoE+Muon achieves a 68\% memory reduction and a 3.2$\times$ inference speedup while improving perplexity by 8--12\%. We provide detailed ablations of 15 architectural and optimizer components, stability analyses across 100+ training runs, and practical implementation guidelines, including the Newton-Schulz coefficients $(3.4445, -4.7750, 2.0315)$ optimized by~\cite{su2024muonblog}. Our theoretical analysis and comprehensive experiments establish Muon as a principled, robust alternative to AdamW that particularly excels when combined with modern efficiency techniques and large-batch training regimes.
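
For concreteness, the sketch below illustrates the quintic Newton-Schulz orthogonalization step at the core of Muon, using the coefficients $(3.4445, -4.7750, 2.0315)$ cited in the abstract. It is a minimal sketch only: the function name, iteration count, bfloat16 cast, and normalization details are illustrative assumptions, not the paper's exact implementation.

```python
import torch

def newton_schulz_orthogonalize(G: torch.Tensor, steps: int = 5) -> torch.Tensor:
    """Approximately orthogonalize a 2D (momentum-averaged) gradient matrix G
    with a quintic Newton-Schulz iteration, as used in Muon.

    Sketch under stated assumptions; `steps=5` and the bf16 cast follow common
    practice but are choices of this example, not the paper's implementation.
    """
    a, b, c = 3.4445, -4.7750, 2.0315  # coefficients cited in the abstract
    X = G.to(torch.bfloat16)
    # Scale so the spectral norm is at most 1 (Frobenius norm upper-bounds it),
    # which is required for the iteration to converge.
    X = X / (X.norm() + 1e-7)
    # Work with the wide orientation so X @ X.T is the smaller Gram matrix.
    transposed = X.shape[0] > X.shape[1]
    if transposed:
        X = X.T
    for _ in range(steps):
        A = X @ X.T
        X = a * X + (b * A + c * (A @ A)) @ X
    if transposed:
        X = X.T
    return X.to(G.dtype)

# Hypothetical usage: for each 2D weight W with momentum buffer M,
#   W -= lr * newton_schulz_orthogonalize(M)
# (non-matrix parameters such as biases and norms would still use AdamW).
```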
Area: Systems for ML and ML for systems
Type: Systemization of Knowledge (SoK)
Revision: No
Submission Number: 2