An Underexplored Dilemma between Confidence and Calibration in Quantized Neural Networks

Published: 18 Oct 2021, Last Modified: 05 May 2023, ICBINB@NeurIPS 2021 Poster
Keywords: post-training quantization, confidence, deep learning, convolutional neural network, imagenet, cifar100, low precision, resnet, calibration, mobilenet
TL;DR: We explore the novel insight that model calibration can help explain the robustness of CNN accuracy to post-training quantization
Abstract: Modern convolutional neural networks (CNNs) are known to be overconfident in terms of their calibration on unseen input data; that is, they are more confident than they are accurate. This is undesirable if the predicted probabilities are to be used for downstream decision making. When considering accuracy, CNNs are also surprisingly robust to compression techniques, such as quantization, which aim to reduce computational and memory costs. We show that this robustness can be partially explained by the calibration behavior of modern CNNs, and may even be improved by overconfidence. This follows from an intuitive result: low-confidence predictions are more likely to change post-quantization, whilst also being less accurate; high-confidence predictions are more accurate but harder to change. Thus, only a minimal drop in post-quantization accuracy is incurred. This presents a potential conflict in neural network design: worse calibration from overconfidence may lead to better robustness to quantization. We perform experiments applying post-training quantization to a variety of CNNs, on the CIFAR-100 and ImageNet datasets, and make our code publicly available.
Category: Criticism of default practices (questioning a widespread practice in the community); Other
Category Description: We explore an area overlooked by default practices in quantization, which leads to a negative result
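The intuition stated in the abstract, that low-confidence predictions are the ones most likely to flip after quantization while contributing least to accuracy, can be illustrated with a short experiment. The sketch below is not the authors' released code; the model choice (ResNet-18), the naive per-tensor int8 weight rounding, and the random-input stand-in for a real validation set are illustrative assumptions made for brevity.

```python
# Minimal sketch (assumes torch and torchvision >= 0.13 are installed): quantize a
# pretrained CNN's weights post-training and measure how often predictions flip,
# bucketed by the full-precision model's confidence.
import copy
import torch
import torchvision


def quantize_weights_int8(model):
    """Per-tensor symmetric int8 fake-quantization of all weights.

    A simple stand-in for a full post-training quantization pipeline,
    which would also quantize activations and fold batch norm.
    """
    qmodel = copy.deepcopy(model)
    with torch.no_grad():
        for p in qmodel.parameters():
            scale = p.abs().max() / 127.0
            if scale > 0:
                p.copy_(torch.round(p / scale).clamp(-127, 127) * scale)
    return qmodel


# Downloads ImageNet-pretrained weights on first use.
model = torchvision.models.resnet18(weights="IMAGENET1K_V1").eval()
qmodel = quantize_weights_int8(model)

# Stand-in batch; replace with real ImageNet or CIFAR-100 validation data.
x = torch.randn(64, 3, 224, 224)

with torch.no_grad():
    probs_fp = torch.softmax(model(x), dim=1)
    probs_q = torch.softmax(qmodel(x), dim=1)

conf, pred_fp = probs_fp.max(dim=1)   # full-precision confidence and prediction
pred_q = probs_q.argmax(dim=1)        # post-quantization prediction
flipped = pred_fp != pred_q

# Flip rate per confidence bucket.
for lo in [0.0, 0.2, 0.4, 0.6, 0.8]:
    mask = (conf >= lo) & (conf < lo + 0.2)
    if mask.any():
        print(f"confidence [{lo:.1f}, {lo + 0.2:.1f}): "
              f"{flipped[mask].float().mean().item():.2%} of predictions changed")
```

On real validation data, the paper's argument predicts that flips concentrate in the low-confidence buckets, which is why top-1 accuracy changes little after quantization.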