CondiQuant: Condition Number Based Low-Bit Quantization for Image Super-Resolution

03 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · License: CC BY 4.0
Keywords: Low-bit quantization, super-resolution, post-training quantization
TL;DR: A post-training quantization method for super-resolution models leveraging condition number.
Abstract: Low-bit model quantization for image super-resolution (SR) is a longstanding task, valued for its remarkable compression and acceleration ability. However, accuracy degradation is inevitable when compressing a full-precision (FP) model to ultra-low bit widths ($2\sim4$ bits). Experimentally, we observe that this degradation is mainly attributable to the quantization of activations rather than model weights. In numerical analysis, the condition number of a weight matrix measures how much the output of a function can change for a small change in its input, and thus inherently reflects the quantization error. Therefore, we propose CondiQuant, a condition-number-based low-bit post-training quantization method for image super-resolution. Specifically, we design an efficient proximal gradient descent algorithm that reduces the condition number of the weights while keeping the output as unchanged as possible. Through comprehensive experiments, we demonstrate that CondiQuant outperforms existing state-of-the-art PTQ methods in accuracy without computation overhead, and achieves the theoretically optimal compression ratio in model parameters. Our code will be released soon.
Supplementary Material: pdf
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 1518