Keywords: LLM, Quantization, Mixed-precision
Abstract: Quantization has become one of the most effective methodologies for compressing LLMs into a smaller size.
However, existing quantization solutions still suffer from either a non-negligible accuracy drop or system inefficiency.
In this paper, we conduct a comprehensive analysis of general quantization principles and their effect on the triangle of accuracy, memory consumption, and system efficiency.
We propose MixLLM, which explores the new optimization space of mixed-precision quantization between output features, based on the insight that different output features matter differently to the model.
MixLLM identifies the output features with high salience from a global view rather than within each single layer,
effectively assigning the larger bit-width to the output features that need it most, achieving good accuracy with low memory consumption.
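A minimal sketch of this global, cross-layer bit-width assignment between output features is given below. The salience proxy (activation-norm-weighted weight magnitude) and the 10% high-precision budget are illustrative assumptions for the sketch, not the exact metric or budget used by MixLLM.

```python
import numpy as np

def assign_bitwidths(weights, act_norms, high_bit_fraction=0.10):
    """weights: {layer_name: [out_features, in_features] float array};
    act_norms: {layer_name: [in_features] per-input-channel activation norms}."""
    scores = []  # (salience, layer_name, output_feature_index)
    for name, w in weights.items():
        # Salience proxy per output feature: activation-weighted weight magnitude.
        s = (np.abs(w) * act_norms[name]).sum(axis=1)
        scores.extend((float(v), name, i) for i, v in enumerate(s))
    # Rank output features globally across all layers, not within each layer.
    scores.sort(reverse=True)
    k = int(len(scores) * high_bit_fraction)
    plan = {name: np.full(w.shape[0], 4, dtype=np.int8)
            for name, w in weights.items()}
    for _, name, idx in scores[:k]:  # the most salient features get the larger bit-width
        plan[name][idx] = 8
    return plan  # per-output-feature bit-width for every layer
```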
We present the sweet spot of the quantization configuration through algorithm-system co-design, which leads to both high accuracy and high system efficiency.
To address the system challenge of this sweet spot, we design a two-step dequantization that makes easy use of the int8 Tensor Core, together with fast data type conversion that significantly reduces the dequantization overhead.
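The idea behind the two-step dequantization can be emulated as below, assuming 4-bit weights and 8-bit activations with per-output-feature weight scales; the layouts and scaling granularity are assumptions for illustration, and the real kernel runs the integer matmul on the int8 Tensor Core rather than in NumPy.

```python
import numpy as np

def unpack_int4_to_int8(packed):
    """packed: [N, K//2] uint8 array holding two signed 4-bit weights per byte."""
    lo = (packed & 0x0F).astype(np.int16)
    hi = ((packed >> 4) & 0x0F).astype(np.int16)
    lo = np.where(lo > 7, lo - 16, lo).astype(np.int8)  # sign-extend low nibble
    hi = np.where(hi > 7, hi - 16, hi).astype(np.int8)  # sign-extend high nibble
    return np.stack([lo, hi], axis=-1).reshape(packed.shape[0], -1)  # [N, K]

def w4a8_gemm(x_int8, x_scale, w_packed, w_scale):
    """x_int8: [M, K] int8 activations with per-tensor float scale x_scale;
    w_packed: [N, K//2] packed 4-bit weights; w_scale: [N] float scales."""
    # Step 1: integer-only unpacking of int4 weights to int8, so the matmul can
    # be served by the int8 Tensor Core (emulated here with an int32 matmul).
    w_int8 = unpack_int4_to_int8(w_packed)
    acc_int32 = x_int8.astype(np.int32) @ w_int8.astype(np.int32).T  # [M, N]
    # Step 2: one float conversion plus scaling of the int32 accumulator,
    # keeping the costly dequantization out of the inner GEMM loop.
    return acc_int32.astype(np.float32) * x_scale * w_scale[None, :]
```

In a real GPU kernel, step 1 would typically happen in registers right before the Tensor Core instruction and step 2 would be fused into the GEMM epilogue; the sketch only illustrates the split of work.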
Extensive experiments show that MixLLM achieves better accuracy than a set of state-of-the-art works on a variety of tasks for popular LLMs.
On Llama 3 8B, it achieves 0.31 lower perplexity and a 0.43% improvement on zero-shot tasks compared to QoQ, with similar memory consumption and system efficiency.
Primary Area: foundation or frontier models, including LLMs
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 9699