TL;DR: This paper examines floating-point quantization in LLM training, proposes a unified scaling law, and offers key recommendations for improving model performance and cost-efficiency.
Abstract: Low-precision training is considered an effective strategy for reducing both training and downstream inference costs. Previous scaling laws for precision mainly focus on integer quantization, pay less attention to the constituents of floating-point (FP) quantization, and thus cannot fit LLM losses well in this scenario. In contrast, although FP quantization training is more commonly implemented in production, its research has been relatively superficial. In this paper, we thoroughly explore the effects of FP quantization targets, exponent bits, mantissa bits, and the calculation granularity of the scaling factor on the FP quantization training performance of LLMs.
In addition to an accurate unified scaling law for FP quantization, we also provide valuable suggestions for the community: (1) Exponent bits contribute slightly more to model performance than mantissa bits. We provide the optimal exponent-mantissa bit ratio for different total bit widths, which is available as a reference for future hardware manufacturers; (2) We discover the existence of a critical data size in low-precision LLM training. Training data beyond the critical data size inversely degrades LLM performance; (3) The optimal FP quantization precision is directly proportional to the computational power, but across a wide computational power range, we estimate that the best cost-performance precision lies between 4 and 8 bits.
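To make the quantization constituents above concrete, here is a minimal NumPy sketch of block-wise FP fake-quantization with configurable exponent and mantissa bits. It is an illustration under simplifying assumptions (symmetric exponent range, no special values), not the paper's implementation; the function name `fp_quantize` is made up for this example.

```python
# A minimal sketch (not the paper's code) of block-wise floating-point
# fake-quantization with configurable exponent (E) and mantissa (M) bits.
# The simplified exponent range and power-of-two block size are assumptions.
import numpy as np

def fp_quantize(x: np.ndarray, exp_bits: int, man_bits: int, block_size: int) -> np.ndarray:
    """Fake-quantize `x` to a 1 + exp_bits + man_bits FP format, using one
    scaling factor per `block_size` contiguous elements."""
    flat = x.reshape(-1, block_size)
    # Largest finite magnitude of the (simplified) target format.
    max_exp = 2 ** (exp_bits - 1) - 1                  # assumed symmetric exponent range
    fp_max = (2.0 - 2.0 ** (-man_bits)) * 2.0 ** max_exp
    # Per-block scale maps the block's largest |value| onto fp_max.
    scale = np.abs(flat).max(axis=1, keepdims=True) / fp_max
    scale = np.where(scale == 0.0, 1.0, scale)
    scaled = flat / scale
    # Snap each scaled value to the nearest representable number: pick its
    # binary exponent, then round the mantissa to `man_bits` fractional bits.
    exp = np.clip(np.floor(np.log2(np.abs(scaled) + 1e-30)), -max_exp, max_exp)
    step = 2.0 ** (exp - man_bits)
    quantized = np.round(scaled / step) * step
    return (quantized * scale).reshape(x.shape)

# Example: an E4M3-like format with a scaling-factor block size of 32.
w = np.random.randn(4, 64).astype(np.float32)
w_q = fp_quantize(w, exp_bits=4, man_bits=3, block_size=32)
print("mean |w - w_q|:", float(np.abs(w - w_q).mean()))
```

Sweeping `exp_bits`, `man_bits`, and `block_size` in a simulation of this kind corresponds to the quantization factors whose effect on training performance the paper studies: more bits or a smaller block size reduce quantization error at higher cost.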
Lay Summary: Low-precision training (using fewer bits to represent numbers) can reduce the cost of training and running AI models like large language models (LLMs). Most prior work focused on integer-based low-precision methods, but real-world systems often use floating-point (FP) quantization, where numbers are split into “exponent” and “mantissa” parts. This paper explores how different FP settings—like how many bits to assign to the exponent versus mantissa, or how to scale numbers—affect LLM performance.
We propose the **Capybara Scaling Law** for float-quantized training, which precisely predicts the model loss as a function of data size, model size, exponent bits, mantissa bits, and the block size of the scaling factors. Key insights include: (1) Exponent bits matter slightly more than mantissa bits, and the optimal balance between them depends on the total bits used—a guide for hardware designers. (2) Training with too much data at low precision can harm performance once a "critical data size" is exceeded. (3) The best precision (4-8 bits) balances cost and performance across most hardware setups.
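As an aid to intuition only, a scaling law of this kind can be pictured as a classic data/model-size loss curve plus a quantization penalty. The template below is an assumed illustration, not the paper's fitted formula:

$$
L(N, D, E, M, B) \;\approx\; \frac{n}{N^{\alpha}} \;+\; \frac{d}{D^{\beta}} \;+\; \epsilon \;+\; \underbrace{f(E, M, B)\,\frac{D^{\beta}}{N^{\alpha}}}_{\text{quantization penalty}}
$$

Here $N$ is model size, $D$ is data size, and $f(E, M, B)$ shrinks as more exponent or mantissa bits, or a finer scaling-factor block size, are used. Because the penalty term in such a template grows with $D$ while the data term shrinks, the loss reaches a minimum at a finite data size and then worsens, which is the qualitative behavior behind insight (2) above.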
These findings help engineers design efficient systems for training AI models and suggest that pushing for ultra-low precision (e.g., 1-3 bits) might not be worth the tradeoffs in accuracy. Instead, moderate precision (4-8 bits) offers the best value for computational resources.
Primary Area: Deep Learning->Large Language Models
Keywords: Floating-point quantization, Scaling law, Training performance
Submission Number: 128