FracQuant: Rank-Adaptive Image Super-resolution Network Quantization via Fractal Complexity Assessment

30 Apr 2026 (modified: 09 May 2026) · ICML 2026 Workshop CoLoRAI Submission · CC BY 4.0
Keywords: Network Quantization, Image Super-resolution, Fractal Analysis, Post-training Quantization
TL;DR: FracQuant uses fractal dimension as a rank proxy for SR quantization, enabling stable rank-adaptive bit allocation and the first successful W2A2 quantization on SR networks, calibrated in under 75 seconds.
Abstract: Deploying image super-resolution (SR) models on edge devices necessitates aggressive quantization. However, current content-aware methods rely on gradient-based complexity estimation which often fails in both smooth and heavily textured regions. These failures lead to unstable bit assignments and denormalization overflow at extreme low-bit settings. The fundamental issue is that gradient magnitudes do not convey information about the intrinsic rank of local image representations, which is the primary quantity governing regional compressibility. We propose FracQuant, a rank-adaptive SR quantization framework that estimates local intrinsic rank through fractal complexity assessment. Fractal dimension provides a geometry-driven and activation-insensitive proxy for rank, producing a complexity signal that is 30 times more stable than gradient-based alternatives. By integrating geometric self-similarity assessment and Dynamic Hybrid soft-hard Labeling (DHL), FracQuant achieves principled rank-adaptive bit allocation within a single-pass post-training quantization (PTQ) pipeline. Experimental results demonstrate that FracQuant matches or exceeds the performance of quantization-aware training (QAT) methods under standard W4A4 and W8A6 settings without requiring ground-truth supervision. Furthermore, FracQuant is the first method to maintain stability for W2A2 quantization on SR networks, a configuration where previous methods consistently fail.
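The abstract describes using fractal dimension as a geometry-driven complexity signal for per-region bit allocation. As an illustration only (the paper's actual estimator is not given here), a minimal sketch of the standard box-counting dimension on a binarized detail map, with a hypothetical bit-allocation rule that gives higher-dimension regions more bits:

```python
import numpy as np

def box_counting_dimension(patch: np.ndarray, threshold: float = 0.5) -> float:
    """Estimate the box-counting (fractal) dimension of a 2D map.

    In a quantization setting the input would be an edge/detail map of a
    local region, so smooth regions yield few foreground pixels. The
    dimension is the slope of log(box count) vs. log(1/box size) over
    dyadic box sizes.
    """
    binary = patch > threshold
    n = min(binary.shape)
    # Dyadic box sizes from n/2 down to 2.
    sizes = 2 ** np.arange(int(np.log2(n)) - 1, 0, -1)
    counts = []
    for size in sizes:
        # Tile the map into size x size boxes and count boxes that
        # contain at least one foreground pixel.
        h = binary.shape[0] // size * size
        w = binary.shape[1] // size * size
        blocks = binary[:h, :w].reshape(h // size, size, w // size, size)
        counts.append(blocks.any(axis=(1, 3)).sum())
    # Slope of the log-log fit is the dimension estimate.
    slope, _ = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)
    return float(slope)

def allocate_bits(dim: float, lo_bits: int = 2, hi_bits: int = 8) -> int:
    """Hypothetical rule: map dimension in [1, 2] linearly to a bit width."""
    frac = min(max((dim - 1.0), 0.0), 1.0)  # clamp to [0, 1]
    return int(round(lo_bits + frac * (hi_bits - lo_bits)))
```

A dense textured map (dimension near 2) would receive close to `hi_bits`, while a region whose detail map reduces to a thin curve (dimension near 1) would receive close to `lo_bits`. This is a sketch of the general technique, not FracQuant's calibrated pipeline.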
Submission Number: 18