µnit Scaling: Simple and Scalable FP8 LLM Training

Published: 13 Jul 2025 · Last Modified: 17 May 2025 · ICML 2025 · CC BY 4.0
Abstract: Large Language Model training with 8-bit floating point (FP8) formats promises significant efficiency improvements, but reduced numerical precision makes training challenging. It is currently possible to train in FP8 only if one is willing to tune various hyperparameters, reduce model scale, or accept the overhead of computing dynamic scale factors. We demonstrate simple, scalable FP8 training that requires no dynamic scaling factors or special hyperparameters, even at large model sizes. Our method, μnit Scaling (μS), also enables simple hyperparameter transfer across model widths, matched numerics across training and inference, and other desirable properties. μnit Scaling is straightforward to implement, consisting of a set of minimal interventions based on a first-principles analysis of common transformer operations. We validate our method by training models from 1B to 13B parameters, performing all hidden linear layer computations in FP8. We achieve quality equal to higher precision baselines while also training up to 33% faster.
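To illustrate the idea of FP8 linear layers with static, shape-derived scales rather than dynamic scale factors, here is a minimal sketch. It is an assumption-laden illustration, not the paper's reference implementation: the class name `UnitScaledLinear`, the `1/sqrt(fan_in)` output scale, and the simulated FP8 rounding are all illustrative choices based on the general unit-scaling approach described in the abstract.

```python
# Hypothetical sketch of a unit-scaled linear layer with simulated FP8 casts.
# Names (UnitScaledLinear, quantize_fp8) and the 1/sqrt(fan_in) output scale
# are illustrative assumptions, not the authors' implementation.
import math
import torch
import torch.nn as nn

FP8 = torch.float8_e4m3fn  # requires PyTorch >= 2.1


def quantize_fp8(x: torch.Tensor) -> torch.Tensor:
    """Simulate a static FP8 cast (no dynamic scale factor) by rounding
    to FP8 and returning to bfloat16 for the matmul."""
    return x.to(FP8).to(torch.bfloat16)


class UnitScaledLinear(nn.Module):
    def __init__(self, fan_in: int, fan_out: int):
        super().__init__()
        # Unit-variance initialization: weights and activations are expected
        # to stay near unit scale, so a fixed FP8 cast suffices.
        self.weight = nn.Parameter(
            torch.randn(fan_out, fan_in, dtype=torch.bfloat16)
        )
        self.scale = 1.0 / math.sqrt(fan_in)  # static, shape-derived scale

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x_fp8 = quantize_fp8(x)
        w_fp8 = quantize_fp8(self.weight)
        # A production kernel would run the GEMM directly in FP8; here the
        # matmul runs in bfloat16 on the FP8-rounded values for portability.
        return (x_fp8 @ w_fp8.t()) * self.scale


x = torch.randn(8, 1024, dtype=torch.bfloat16)
layer = UnitScaledLinear(1024, 4096)
y = layer(x)
print(y.shape, y.std())  # output remains near unit scale
```

Because the scale depends only on the layer shape, it is known at initialization and never recomputed during training, which is what removes the overhead of dynamic scaling that the abstract refers to.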