Abstract: Deep neural network (DNN) based image classifiers have been successfully applied to various scenes but suffer from severe overconfidence, a critical issue in safety-critical applications. In recent years, much research has focused on probabilistic calibration to reduce the risks posed by overconfident predictions. Recent work shows that models calibrated by regularization techniques such as label smoothing are harder to calibrate further. We extend this study to the corrupted-dataset setting, which frequently arises in the image capture process due to factors such as lighting conditions, weather conditions, or device quality. Interestingly, we discover that a post-hoc method like temperature scaling (TS) can hurt the calibration performance of the original models under corruption shift on CIFAR-10, CIFAR-100, and TinyImageNet; we call this phenomenon Negative Calibration (NC). We observe that NC coincides with a decrease in output entropy when the post-hoc method is applied to the pre-trained model, and we take ResNet-18 on CIFAR-10 as an example to understand NC from the perspective of entropy.
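The link between temperature scaling and output entropy can be illustrated with a minimal sketch (the logits and helper names below are hypothetical, not from the paper): dividing the logits by a temperature T before the softmax raises entropy when T > 1 and lowers it when T < 1, which is the mechanism underlying the entropy decrease discussed above.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def entropy(p):
    # Shannon entropy (nats) of a probability vector.
    return -(p * np.log(p + 1e-12)).sum(axis=-1)

def temperature_scale(logits, T):
    # Temperature scaling: rescale logits before the softmax.
    # T > 1 softens predictions (higher entropy); T < 1 sharpens them.
    return softmax(logits / T)

# Hypothetical logits for one 3-class example.
logits = np.array([[2.0, 0.5, -1.0]])
p_orig = softmax(logits)
p_sharp = temperature_scale(logits, 0.5)   # T < 1: entropy decreases
p_soft = temperature_scale(logits, 2.0)    # T > 1: entropy increases
assert entropy(p_sharp)[0] < entropy(p_orig)[0] < entropy(p_soft)[0]
```

If the temperature fitted on clean validation data is below one (or simply too low for the shifted data), applying it under corruption sharpens already overconfident predictions, consistent with the entropy decrease described in the abstract.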