Keywords: quantization, vision encoder, vision transformer
TL;DR: We show that outliers in vision encoders share similar components across images. Caching these components as prefix tokens mitigates attention sinks and lets the outlier tokens be removed, improving quantization performance.
Abstract: Transformer-based vision encoders---such as CLIP---are central to multimodal intelligence, powering applications from autonomous web agents to robotic control. Since these applications often demand real-time processing of massive visual data, reducing the inference cost of vision encoders is critical. Post-training quantization offers a practical path, but remains challenging even at 8-bit precision due to massive-scale activations (i.e., outliers). In this work, we propose \textit{RegCache}, a training-free algorithm to mitigate outliers in vision encoders, enabling quantization with significantly smaller accuracy drops. RegCache introduces outlier-prone yet semantically meaningless prefix tokens to the target vision encoder, preventing outliers from emerging in the other tokens. Notably, we observe that outliers in vision encoders behave differently from those in language models, motivating two technical innovations: middle-layer prefixing and token deletion. Experiments show that our method consistently improves the accuracy of quantized models across both text-supervised and self-supervised vision encoders.
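As a rough illustration of the idea described in the abstract (our own sketch, not the authors' released code), the snippet below shows how cached prefix tokens could be prepended at a middle layer of a ViT-style encoder and deleted again before the output, mirroring the middle-layer prefixing and token deletion steps. A timm-style model with `patch_embed`, `blocks`, and `norm` attributes is assumed; the names `cache_prefix_tokens`, `forward_with_prefix`, `layer_idx`, and `num_prefix` are hypothetical, and class-token/positional-embedding handling is omitted for brevity.

```python
import torch


@torch.no_grad()
def cache_prefix_tokens(model, calib_images, layer_idx, num_prefix=4):
    """Sketch: average the hidden states of the most outlier-prone tokens at
    `layer_idx` over a small calibration set to form cached prefix tokens."""
    feats = []
    for x in calib_images:                        # x: (1, 3, H, W)
        h = model.patch_embed(x)                  # (1, N, D), assumed API
        for blk in model.blocks[:layer_idx]:
            h = blk(h)
        norms = h.norm(dim=-1)                    # (1, N): activation magnitude per token
        top = norms.topk(num_prefix, dim=1).indices
        feats.append(h.gather(1, top.unsqueeze(-1).expand(-1, -1, h.size(-1))))
    # (1, num_prefix, D): shared prefix reused for every input image
    return torch.cat(feats, dim=0).mean(dim=0, keepdim=True)


@torch.no_grad()
def forward_with_prefix(model, x, prefix, layer_idx):
    """Sketch: prepend the cached prefix at a middle layer (middle-layer
    prefixing) and drop it before the output stage (token deletion)."""
    h = model.patch_embed(x)
    for blk in model.blocks[:layer_idx]:
        h = blk(h)
    h = torch.cat([prefix.expand(h.size(0), -1, -1), h], dim=1)
    for blk in model.blocks[layer_idx:]:
        h = blk(h)
    h = h[:, prefix.size(1):]                     # delete the semantically meaningless prefix tokens
    return model.norm(h)
```

Under these assumptions, the prefix is computed once on a small calibration set, so the procedure stays training-free; inference adds only a concatenation and a slice on top of the standard forward pass.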
Supplementary Material: zip
Primary Area: foundation or frontier models, including LLMs
Submission Number: 5034