Abstract: Post-training quantization (PTQ) is an efficient model compression technique that quantizes a pretrained full-precision model using only a small calibration set of unlabeled samples, without retraining. PTQ methods for convolutional neural networks (CNNs) provide quantization results comparable to their full-precision counterparts. Directly applying them to vision transformers (ViTs), however, incurs severe performance degradation, mainly due to the architectural differences between CNNs and ViTs. In particular, the distribution of activations for each channel varies drastically across input instances, making PTQ methods for CNNs inappropriate for ViTs. To address this, we introduce instance-aware group quantization for ViTs (IGQ-ViT). Specifically, we propose to split the channels of activation maps into multiple groups dynamically for each input instance, such that activations within each group share similar statistical properties. We also extend our scheme to quantize softmax attentions across tokens. In addition, the number of groups for each layer is adjusted to minimize the discrepancy between the predictions of quantized and full-precision models, under a bit-operation (BOP) constraint. We show extensive experimental results on image classification, object detection, and instance segmentation with various transformer architectures, demonstrating the effectiveness of our approach.
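To make the grouping idea concrete, below is a minimal PyTorch sketch of instance-aware group quantization under stated assumptions: the function name `instance_aware_group_quant`, the heuristic of sorting channels by their per-instance dynamic range before splitting them into contiguous groups, and the uniform min-max quantizer are all illustrative choices, not the paper's exact algorithm.

```python
import torch

def instance_aware_group_quant(x: torch.Tensor, num_groups: int = 8,
                               n_bits: int = 4) -> torch.Tensor:
    """Simulate group-wise quantization of one instance's activations.

    x: (tokens, channels) activation map for a single input instance.
    Channels are grouped *per instance* so that channels within a group
    share similar statistics and can share one quantizer.
    """
    # Per-channel dynamic range, computed on this instance only.
    ranges = x.max(dim=0).values - x.min(dim=0).values
    # Put channels with similar ranges in the same group (a simple proxy
    # for the grouping criterion; the paper optimizes this assignment).
    order = torch.argsort(ranges)
    qmax = 2 ** n_bits - 1
    x_q = torch.empty_like(x)
    for idx in torch.chunk(order, num_groups):
        xg = x[:, idx]
        lo, hi = xg.min(), xg.max()
        scale = (hi - lo).clamp(min=1e-8) / qmax   # shared step size per group
        zero = torch.round(-lo / scale)            # shared zero point per group
        q = torch.clamp(torch.round(xg / scale) + zero, 0, qmax)
        x_q[:, idx] = (q - zero) * scale           # dequantize to simulate PTQ
    return x_q

# Usage: a ViT-B-like activation map (197 tokens, 768 channels) with widely
# varying per-channel scales, quantized to 4 bits with 8 dynamic groups.
x = torch.randn(197, 768) * torch.logspace(-1, 1, 768)
x_q = instance_aware_group_quant(x, num_groups=8, n_bits=4)
```

Because the grouping is recomputed from each instance's own statistics, channels whose ranges shift from input to input still fall into groups whose shared quantization parameters fit them, which a fixed, calibration-time grouping cannot guarantee.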