Layer Importance Estimation With Imprinting for Neural Network Quantization

CVPR Workshops 2021
Abstract: Neural network quantization achieves high compression rates by using a fixed low-bit-width representation of weights and activations while maintaining the accuracy of the high-precision original network. Mixed-precision (per-layer bit-width) quantization can push compression further at finer granularity than fixed-precision quantization, but it requires careful tuning to preserve accuracy. We propose an accuracy-aware criterion to quantify each layer's importance. Our method applies imprinting per layer, which acts as an efficient proxy module for accuracy estimation. We rank the layers by the accuracy gain over the preceding modules and iteratively quantize first those with the smallest accuracy gain. Previous mixed-precision methods rely either on expensive search techniques such as reinforcement learning (RL) or on end-to-end optimization that offers little insight into the resulting quantization configuration. Our method is a one-shot, efficient, accuracy-aware estimation of layer importance and thus provides better interpretability of the selected bit-width configuration.
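
The abstract outlines the pipeline at a high level: imprint a proxy classifier after each layer, measure the per-layer accuracy gain, and quantize the layers with the smallest gain first. Below is a minimal sketch of how such a per-layer imprinting proxy and ranking could look in PyTorch; the function names, toy model, and calibration data are illustrative assumptions, not the authors' released code.

```python
# Sketch of layer-importance ranking via per-layer imprinting, assuming a
# feature extractor that can be evaluated as a sequential prefix of layers.
# All names (imprint_accuracy, rank_layers_by_gain, the toy model/data) are
# illustrative assumptions, not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


def imprint_accuracy(feats_train, labels_train, feats_val, labels_val, num_classes):
    """Build an imprinted (class-mean) classifier on normalized features and
    return its validation accuracy; this serves as the per-layer accuracy proxy."""
    feats = F.normalize(feats_train.flatten(1), dim=1)
    prototypes = torch.stack(
        [feats[labels_train == c].mean(0) for c in range(num_classes)]
    )
    prototypes = F.normalize(prototypes, dim=1)
    val = F.normalize(feats_val.flatten(1), dim=1)
    preds = (val @ prototypes.t()).argmax(dim=1)
    return (preds == labels_val).float().mean().item()


def rank_layers_by_gain(layers, x_train, y_train, x_val, y_val, num_classes):
    """Run the network layer by layer, imprint a proxy classifier after each one,
    and rank layers by the accuracy gained over the previous layer. Layers with
    the smallest gain come first, i.e. they are the first candidates for
    low-bit quantization."""
    gains, prev_acc = [], 1.0 / num_classes  # chance-level baseline
    h_train, h_val = x_train, x_val
    with torch.no_grad():
        for idx, layer in enumerate(layers):
            h_train, h_val = layer(h_train), layer(h_val)
            acc = imprint_accuracy(h_train, y_train, h_val, y_val, num_classes)
            gains.append((idx, acc - prev_acc))
            prev_acc = acc
    return sorted(gains, key=lambda t: t[1])  # least important layers first


if __name__ == "__main__":
    torch.manual_seed(0)
    # Toy stand-in for a pretrained feature extractor and a small calibration set.
    layers = nn.ModuleList(
        [nn.Sequential(nn.Linear(32, 32), nn.ReLU()) for _ in range(4)]
    )
    x_tr, y_tr = torch.randn(256, 32), torch.randint(0, 10, (256,))
    x_va, y_va = torch.randn(128, 32), torch.randint(0, 10, (128,))
    for layer_idx, gain in rank_layers_by_gain(layers, x_tr, y_tr, x_va, y_va, 10):
        print(f"layer {layer_idx}: accuracy gain {gain:+.3f}")
```

In this reading, the resulting ordering would then drive a mixed-precision assignment, e.g. giving the lowest bit-widths to the layers at the front of the ranked list; how bit-widths are actually assigned per rank is detailed in the paper, not in this sketch.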