Amortized-Precision Quantization for Early Exiting in Vision Transformers

18 Sept 2025 (modified: 12 Nov 2025) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: Quantization, Early Exiting, Vision Transformers
TL;DR: We introduce Amortized-Precision Quantization (APQ) and its realization MAQEE, which jointly adapt bit-width allocation and early exiting in Vision Transformers, reducing bit-operations by up to 95% while maintaining accuracy.
Abstract: Vision Transformers (ViTs) achieve state-of-the-art results across classification, detection, and segmentation, but their heavy computation hinders deployment on resource-constrained devices. Quantization is a common technique to improve efficiency, yet conventional approaches assume static inference and ignore the input-dependent utility of layers under dynamic strategies such as Early Exiting (EE). This mismatch leads to inefficient bit allocation: shallow layers may be over-provisioned while deeper exits, which dominate late-stage decisions, remain under-optimized. We introduce **Amortized-Precision Quantization (APQ)**, a new perspective that treats precision as a utilization-dependent resource, exposing depth–precision and shallow-deep trade-offs. Building on APQ, we propose **Mutual Adaptive Quantization with Early Exiting (MAQEE)**, a bi-level optimization framework that jointly calibrates exit thresholds and reallocates bit-widths under risk control. We theoretically establish MAQEE's superiority over static quantization in dynamic inference, and empirically show that it reduces bit-operations by up to 95% while preserving accuracy, outperforming strong baselines by as much as 20% on ViT classification, detection, and segmentation benchmarks.
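Since only the abstract is available here, the following is a minimal, hypothetical PyTorch sketch (not the authors' implementation) of the general idea the abstract describes: running transformer blocks at per-block bit-widths and stopping at the first confidence-thresholded exit. All names (`fake_quantize`, `ToyBlock`, `early_exit_inference`), the bit-width schedule, and the thresholds are illustrative assumptions; MAQEE's actual bi-level calibration of thresholds and bit allocations is not shown.

```python
# Hypothetical sketch: per-block fake-quantized inference with early exits.
import torch
import torch.nn as nn
import torch.nn.functional as F


def fake_quantize(x: torch.Tensor, bits: int) -> torch.Tensor:
    """Symmetric uniform fake quantization of activations (illustrative only)."""
    qmax = 2 ** (bits - 1) - 1
    scale = x.abs().max().clamp(min=1e-8) / qmax
    return torch.round(x / scale).clamp(-qmax, qmax) * scale


class ToyBlock(nn.Module):
    """Stand-in for a ViT block: a single linear projection plus GELU."""
    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.gelu(self.proj(x))


@torch.no_grad()
def early_exit_inference(x, blocks, exit_heads, bit_widths, thresholds):
    """Process blocks at their assigned bit-widths; return at the first confident exit."""
    for i, (block, bits) in enumerate(zip(blocks, bit_widths)):
        x = fake_quantize(block(x), bits)
        if i in exit_heads:
            logits = exit_heads[i](x.mean(dim=1))  # pool tokens, then classify
            conf = F.softmax(logits, dim=-1).max(dim=-1).values
            if conf.item() >= thresholds[i]:
                return logits, i  # confident enough: exit at block i
    # No intermediate exit fired: classify with the deepest head.
    logits = exit_heads[max(exit_heads)](x.mean(dim=1))
    return logits, len(blocks) - 1


if __name__ == "__main__":
    dim, num_classes, depth = 64, 10, 6
    blocks = nn.ModuleList(ToyBlock(dim) for _ in range(depth))
    # Illustrative schedule: deeper blocks get more bits (a toy depth-precision trade-off).
    bit_widths = [4, 4, 6, 6, 8, 8]
    exit_heads = {2: nn.Linear(dim, num_classes), 5: nn.Linear(dim, num_classes)}
    thresholds = {2: 0.6, 5: 0.0}  # the final exit always fires
    x = torch.randn(1, 16, dim)  # (batch, tokens, dim), batch of 1 for the demo
    logits, exited_at = early_exit_inference(x, blocks, exit_heads, bit_widths, thresholds)
    print(f"exited at block {exited_at}, prediction = {logits.argmax(-1).item()}")
```

The schedule that assigns more bits to deeper blocks is only one possible choice; it mirrors the abstract's observation that deeper exits dominate late-stage decisions, but the paper's point is precisely that such allocations should be calibrated jointly with the exit thresholds rather than fixed by hand.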
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 10182