Neural Clamping: Joint Input Perturbation and Temperature Scaling for Neural Network Calibration

Published: 21 Jul 2024, Last Modified: 21 Jul 2024. Accepted by TMLR. License: CC BY-SA 4.0
Abstract: Neural network calibration is an essential task in deep learning for ensuring consistency between the confidence of model predictions and the true correctness likelihood. In this paper, we propose a new post-processing calibration method called $\textbf{Neural Clamping}$, which applies a simple joint input-output transformation to a pre-trained classifier via a learnable universal input perturbation and an output temperature scaling parameter. Moreover, we provide theoretical explanations for why Neural Clamping is provably better than temperature scaling. Evaluated on the BloodMNIST, CIFAR-100, and ImageNet image recognition datasets and a variety of deep neural network models, our empirical results show that Neural Clamping significantly outperforms state-of-the-art post-processing calibration methods. The code is available at github.com/yungchentang/NCToolkit, and the demo is available at huggingface.co/spaces/TrustSafeAI/NCTV.
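The abstract describes calibrating a frozen classifier with two learnable components: a universal input perturbation added to every input and a temperature that rescales the output logits. A minimal sketch of that joint input-output transformation, using a hypothetical toy linear "classifier" in NumPy (the actual method trains the perturbation and temperature on a held-out calibration set with a pre-trained deep network):

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))   # frozen weights of a toy "pre-trained" model

def classifier(x):
    return x @ W                   # logits of the pre-trained classifier

# Calibration parameters per the abstract: one universal input perturbation
# shared across all inputs, plus a single output temperature scalar.
# (Values here are illustrative, not trained.)
delta = np.zeros(4)                # universal input perturbation
T = 1.5                            # temperature; T > 1 softens confidence

x = rng.standard_normal((2, 4))    # a batch of inputs
probs_base = softmax(classifier(x))                # uncalibrated confidence
probs_nc = softmax(classifier(x + delta) / T)      # Neural Clamping output
```

With `delta` fixed at zero the transformation reduces to plain temperature scaling; Neural Clamping additionally optimizes the input-side perturbation jointly with `T`.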
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: We have made minor changes based on the reviewers' suggestions. We added the code and demo links to the abstract. Following Reviewer pRL2's suggestion, we highlighted in Section 4.3 that the theoretically derived input perturbation can achieve performance similar to that obtained by training. Following Reviewer 7Ruf's suggestion, we explained the reasons for this experimental setup in Appendix G.
Code: https://github.com/yungchentang/NCToolkit
Supplementary Material: pdf
Assigned Action Editor: ~Bruno_Loureiro1
Submission Number: 2601