µnit Scaling: Simple and Scalable FP8 LLM Training

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: µnit Scaling combines stable, highly efficient FP8 training with significant cost savings through hyperparameter transfer.
Abstract: Large language model training with 8-bit floating point (FP8) formats promises significant efficiency improvements, but reduced numerical precision makes training challenging. It is currently possible to train in FP8 only if one is willing to tune various hyperparameters, reduce model scale, or accept the overhead of computing dynamic scale factors. We demonstrate simple, scalable FP8 training that requires no dynamic scaling factors or special hyperparameters, even at large model sizes. Our method, µnit Scaling (µS), also enables simple hyperparameter transfer across model widths, matched numerics across training and inference, and other desirable properties. µnit Scaling is straightforward to implement, consisting of a set of minimal interventions based on a first-principles analysis of transformer operations. We validate our method by training models with parameters ranging from 1B to 13B, performing all hidden linear layer computations in FP8. We achieve quality equal to higher-precision baselines while also training up to 33% faster.
Lay Summary: Large Language Model (LLM) training is very resource-intensive and expensive. Training typically uses 16-bit number formats and requires tuning knobs known as hyperparameters to achieve good performance. Using smaller, 8-bit formats like FP8 can make training faster, but is also more challenging as models get larger. Our paper presents a method, µnit Scaling (µS), that enables simple, scalable FP8 training that works easily even at large model sizes. µS also keeps optimal hyperparameter values stable as model sizes grow. We validate our method by training models from 1B to 13B parameters, performing the hidden linear layer computations in FP8, and matching the quality of higher-precision baselines while training up to 33% faster.
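
The following is a minimal, illustrative sketch (not the authors' released implementation) of the general idea behind unit-scaled FP8 linear layers: weights are kept at unit variance and the matmul output is rescaled by a static 1/sqrt(fan_in) factor, so activations and weights stay near unit scale and can be cast to FP8 (E4M3) without dynamically computed scale factors. The class and function names here (`UnitScaledLinear`, `simulate_fp8`) are assumptions for illustration, and the FP8 cast is only simulated via a round-trip through `torch.float8_e4m3fn` where available.

```python
# Hypothetical sketch of a unit-scaled linear layer in PyTorch.
# Not the paper's API; names and details are illustrative assumptions.
import math
import torch
import torch.nn as nn


def simulate_fp8(x: torch.Tensor) -> torch.Tensor:
    """Round-trip a tensor through FP8 E4M3 to mimic FP8 matmul inputs.

    Requires a PyTorch build exposing torch.float8_e4m3fn; otherwise the
    tensor is returned unchanged.
    """
    if hasattr(torch, "float8_e4m3fn"):
        return x.to(torch.float8_e4m3fn).to(x.dtype)
    return x


class UnitScaledLinear(nn.Module):
    """Linear layer with unit-variance weights and a static output scale."""

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        # Unit-variance initialization: no fan-in shrinkage baked into the weights.
        self.weight = nn.Parameter(torch.randn(out_features, in_features))
        # Static scale applied after the matmul keeps outputs near unit variance.
        self.output_scale = 1.0 / math.sqrt(in_features)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Both operands are near unit scale, so a fixed FP8 cast is safe.
        x_fp8 = simulate_fp8(x)
        w_fp8 = simulate_fp8(self.weight)
        return (x_fp8 @ w_fp8.t()) * self.output_scale


if __name__ == "__main__":
    layer = UnitScaledLinear(4096, 4096)
    x = torch.randn(8, 4096)
    y = layer(x)
    print(y.std())  # roughly 1: output variance stays close to unit scale
```

The design choice this sketch highlights is that the scale factor is a fixed function of layer width rather than a per-tensor statistic computed during training, which is what removes the overhead and complexity of dynamic scaling.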
Primary Area: Deep Learning->Large Language Models
Keywords: LLM, FP8, Transformer, Model Training, Attention
Submission Number: 15192