ResQ: Mixed-Precision Quantization of Large Language Models with Low-Rank Residuals

Published: 05 Mar 2025, Last Modified: 14 Apr 2025 · SCOPE - ICLR 2025 Oral · CC BY 4.0
Track: Main paper track (up to 5 pages excluding references and appendix)
Keywords: quantization, weight quantization, activation quantization, kv cache quantization, llm quantization, efficient inference
TL;DR: We perform weight/activation/kv cache quantization of large language models to 4-bit while keeping only 1/8 channels in 8-bit.
Abstract: Quantizing the weights, activations, and KV cache of large language models to 4-bit without degrading generalizability is challenging because activation outliers induce large quantization errors. We propose ResQ, a post-training quantization (PTQ) method that uses principal component analysis to identify a low-rank subspace (in practice 1/8 of the hidden dimension) and keeps coefficients within this subspace in 8-bit precision while quantizing the rest to 4-bit. Within each subspace, an invariant random rotation is applied to further suppress outliers. ResQ outperforms recent PTQ methods on the Llama and Qwen2.5 model families, achieving up to 33% lower Wikitext perplexity than SpinQuant and up to 3x speedup over 16-bit inference.
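To make the projection-based mixed-precision idea concrete, below is a minimal, hypothetical sketch (not the authors' implementation): it assumes simple symmetric per-tensor fake quantization on a single activation matrix, and omits the random rotation, weight and KV-cache handling, and any kernel-level details. Function names and the calibration setup are illustrative only.

```python
import numpy as np

def fake_quant(x, bits):
    # Symmetric per-tensor fake quantization to the given bit-width (illustrative).
    qmax = 2 ** (bits - 1) - 1
    max_abs = np.max(np.abs(x))
    scale = max_abs / qmax if max_abs > 0 else 1.0
    return np.round(x / scale) * scale

def mixed_precision_quant(X, keep_ratio=1 / 8):
    # X: (tokens, hidden) activation matrix from a calibration sample.
    # 1) Find a PCA basis over the hidden dimension.
    Xc = X - X.mean(axis=0, keepdims=True)
    cov = Xc.T @ Xc / Xc.shape[0]
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    U = eigvecs[:, np.argsort(eigvals)[::-1]]  # principal directions, descending variance

    r = int(X.shape[1] * keep_ratio)           # rank of the high-precision subspace
    coeffs = X @ U                             # coefficients in the PCA basis

    # 2) Keep the top-r coefficients in 8-bit, quantize the rest to 4-bit.
    hi = fake_quant(coeffs[:, :r], bits=8)
    lo = fake_quant(coeffs[:, r:], bits=4)
    q = np.concatenate([hi, lo], axis=1)

    # 3) Map the quantized coefficients back to the original basis.
    return q @ U.T

# Usage sketch: X_hat = mixed_precision_quant(calibration_activations)
```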
Anonymization: This submission has been anonymized for double-blind review via the removal of identifying information such as names, affiliations, and identifying URLs.
Submission Number: 63