TL;DR: This work introduces the first FP4 training framework for LLMs, achieving accuracy comparable to BF16 and FP8 with minimal degradation and scaling effectively to 13B-parameter LLMs trained on up to 100B tokens.
Abstract: The growing computational demands of training large language models (LLMs) necessitate more efficient methods. Quantized training presents a promising solution by enabling low-bit arithmetic operations to reduce these costs. While FP8 precision has demonstrated feasibility, leveraging FP4 remains a challenge due to significant quantization errors and limited representational capacity. This work introduces the first FP4 training framework for LLMs, addressing these challenges with two key innovations: a differentiable quantization estimator for precise weight updates and an outlier clamping and compensation strategy to prevent activation collapse. To ensure stability, the framework integrates a mixed-precision training scheme and vector-wise quantization. Experimental results demonstrate that our FP4 framework achieves accuracy comparable to BF16 and FP8, with minimal degradation, scaling effectively to 13B-parameter LLMs trained on up to 100B tokens. With the emergence of next-generation hardware supporting FP4, our framework sets a foundation for efficient ultra-low precision training.
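To make the abstract's ingredients concrete, here is a minimal PyTorch sketch of vector-wise FP4 (E2M1) quantization with outlier clamping. The function name `quantize_fp4_vectorwise`, the 0.999 clamping quantile, and the straight-through estimator in the backward pass are illustrative assumptions rather than the paper's actual differentiable gradient estimator or compensation scheme; the clamped outliers are returned as a high-precision residual only as a rough stand-in for the compensation step.

```python
import torch

# E2M1 (FP4) representable magnitudes, a common 4-bit float layout (assumed here).
FP4_GRID = torch.tensor([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def quantize_fp4_vectorwise(x: torch.Tensor, clamp_quantile: float = 0.999):
    """Sketch of vector-wise FP4 quantization with outlier clamping.

    Returns the (fake-)quantized tensor plus a high-precision residual
    holding the clamped outliers, a rough stand-in for a compensation step.
    """
    # Outlier clamping: cap each row at a high quantile of its magnitude.
    thresh = torch.quantile(x.abs(), clamp_quantile, dim=-1, keepdim=True)
    clamped = x.clamp(-thresh, thresh)
    residual = x - clamped  # outliers kept in higher precision

    # Vector-wise scaling: one scale per row so the row max maps to FP4's max (6.0).
    scale = clamped.abs().amax(dim=-1, keepdim=True).clamp(min=1e-12) / 6.0
    scaled = clamped / scale

    # Round each element to the nearest representable FP4 magnitude.
    grid = FP4_GRID.to(x.device, x.dtype)
    idx = (scaled.abs().unsqueeze(-1) - grid).abs().argmin(dim=-1)
    dequant = torch.sign(scaled) * grid[idx] * scale

    # Straight-through estimator: forward uses the quantized values, backward
    # treats quantization as identity. (This is a placeholder; the paper proposes
    # a more accurate differentiable estimator.)
    return clamped + (dequant - clamped).detach(), residual
```

In a training step, weights or activations could be passed through such a function before the low-precision matmul, with the residual applied separately in higher precision.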
Lay Summary: Training today’s powerful AI language models takes enormous computing power, money, and time. As models get bigger, this challenge only grows. One way to make training more efficient is to use simpler numbers with fewer bits to do the math. But very low-bit formats, such as 4-bit numbers, often make the models less accurate.
To solve this, we created a new training method that lets models use 4-bit numbers without losing performance. We designed smart tricks to help the model handle tiny numbers better—by improving how it learns from data and by managing unusual spikes in values during training.
Our tests show that models trained with this method perform almost as well as those trained with standard higher-precision numbers. This means we can train powerful models faster, cheaper, and with less energy. As future hardware gets better at handling 4-bit math, this approach could help make advanced AI more accessible and sustainable.
Link To Code: https://aka.ms/MS.AMP
Primary Area: Deep Learning->Large Language Models
Keywords: Large Language Models, Machine Learning, Quantization, FP4, Quantized Training
Submission Number: 3788