Keywords: LLM Quantization, LLM Compression, Post Training Quantization
TL;DR: This paper presents ICQuant, a novel framework that leverages outlier statistics to design an efficient index coding scheme for low-bit weight-only quantization.
Abstract: The rapid deployment of Large Language Models (LLMs) highlights the need for efficient low-bit post-training quantization (PTQ) due to their high memory costs. A key challenge in weight quantization is the presence of outliers, which inflate quantization ranges and lead to large errors. While a number of outlier suppression techniques have been proposed, they either fail to effectively shrink the quantization range or incur (relatively) high bit overhead. In this paper, we present ICQuant, a novel framework that leverages outlier statistics to design an efficient index coding scheme for outlier-aware weight-only quantization. Compared to existing outlier suppression techniques requiring $\approx 1$ bit of overhead to halve the quantization range, ICQuant requires only $\approx 0.25$ bits; a significant saving in low-bit regimes (e.g., 2-3 bits). ICQuant can be used on top of any existing quantizer to eliminate outliers and improve quantization quality. Using just 2.3 bits per weight and simple scalar quantizers, ICQuant improves the zero-shot accuracy of the 2-bit Llama3-70B model by up to 130\% and 150\% relative to QTIP (Tseng et al. (2024b)) and QuIP\# (Tseng et al. (2024a)) respectively, and it achieves comparable performance to the best-known fine-tuned quantizer (Malinovskii et al. (2024)) without any fine-tuning.
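To make the abstract's idea concrete, below is a minimal, hypothetical sketch of outlier-aware scalar quantization: the largest-magnitude weights in a group are split off, their positions recorded as an index set, and the inliers and outliers are quantized over separate, tighter ranges. This is not the paper's actual index coding scheme; the `outlier_ratio`, bit-width, and function names are illustrative assumptions, and the paper's contribution is precisely a coding of the index set that costs only $\approx 0.25$ bits per weight rather than the naive overhead of storing positions directly.

```python
# Illustrative sketch only (not ICQuant's actual scheme): outlier-aware
# scalar quantization for one weight group. Outliers are separated by
# magnitude, their positions kept as an index set, and each partition is
# quantized with its own (tighter) uniform grid.
import numpy as np

def quantize_uniform(x, bits):
    """Uniform scalar quantization of x at the given bit-width."""
    lo, hi = x.min(), x.max()
    levels = 2 ** bits - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    return np.round((x - lo) / scale) * scale + lo

def outlier_aware_quantize(w, bits=2, outlier_ratio=1 / 16):
    """Quantize a 1-D weight group, handling outliers separately.

    Returns the dequantized weights and the outlier index set; encoding
    that index set is the per-weight bit overhead of such a scheme.
    """
    k = max(1, int(len(w) * outlier_ratio))
    # Positions of the k largest-magnitude weights (the "outliers").
    outlier_idx = np.argpartition(np.abs(w), -k)[-k:]
    inlier_mask = np.ones(len(w), dtype=bool)
    inlier_mask[outlier_idx] = False

    w_hat = np.empty_like(w)
    # Inliers now span a much smaller range, so the same bit budget
    # buys a finer grid; outliers get their own separate grid.
    w_hat[inlier_mask] = quantize_uniform(w[inlier_mask], bits)
    w_hat[outlier_idx] = quantize_uniform(w[outlier_idx], bits)
    return w_hat, outlier_idx

# Toy usage: a synthetic weight group with injected outliers.
rng = np.random.default_rng(0)
w = rng.standard_normal(256)
w[rng.choice(256, 16, replace=False)] *= 8.0
w_hat, idx = outlier_aware_quantize(w, bits=2)
print("RMSE:", np.sqrt(np.mean((w - w_hat) ** 2)))
```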
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
Author Guide: I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
Submission Number: 715