Enhancing Deep Neural Network Reliability with Refinement and Calibration
Track: long paper (up to 10 pages)
Domain: machine learning
Abstract: Although deep neural networks (DNNs) achieve high predictive accuracy, their confidence estimates are often unreliable, potentially compromising user trust in their decisions. This has driven research into calibrated models, where calibration measures the degree to which a model's predictive confidence matches the empirical probability of correctness. However, calibration metrics can be improved by post-processing a model's outputs to mimic the uncertainty observed at training time, without genuinely improving the model's predictive understanding. Hence, statisticians recommend that a model be refined as well as calibrated. Intuitively, a model is more refined if there is a large difference between its predictive confidence on correct and incorrect predictions (sometimes called sharpness). Calibration and refinement together improve the statistical alignment between model uncertainty and real-world outcomes. We observe that common calibration methods often reduce model refinement. To address this, we propose: (1) a novel loss function that promotes refinement and is optimizable via supervised contrastive loss; (2) a unified training framework, RefCal, that jointly optimizes for calibration, refinement, and accuracy, enhancing DNN reliability. For example, we report (accuracy$\uparrow$, refinement$\uparrow$, ECE$\downarrow$) of (58.81, 95.67, 0.08) on the CIFAR-100-LT dataset (10\% class imbalance), surpassing (46.27, 93.7, 0.22) by the well-known Correctness Ranking Loss.
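Since the abstract reports both ECE and refinement, the following is a minimal sketch of how these two metrics are commonly computed: ECE via equal-width confidence binning, and refinement approximated as the gap between mean confidence on correct versus incorrect predictions. This is an illustrative reading of the standard definitions, not the paper's implementation; function names and the simulated data are assumptions.

```python
# Minimal sketch (not the authors' code) of the two reliability metrics
# named in the abstract, under their common textbook definitions.
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=15):
    """ECE: bin-weighted average of |empirical accuracy - mean confidence|."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap  # weight by fraction of samples in bin
    return ece

def refinement_gap(confidences, correct):
    """Proxy for refinement/sharpness: separation between the confidences
    assigned to correct and incorrect predictions (larger is better)."""
    return confidences[correct].mean() - confidences[~correct].mean()

# Example usage on simulated softmax max-probabilities (hypothetical data).
rng = np.random.default_rng(0)
conf = rng.uniform(0.5, 1.0, size=1000)
corr = rng.random(1000) < conf  # roughly calibrated simulated model
print(expected_calibration_error(conf, corr.astype(float)))
print(refinement_gap(conf, corr))
```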
Presenter: ~Ramya_Hebbalaguppe1
Submission Number: 33