Enhancing Deep Neural Network Reliability with Refinement and Calibration

Published: 02 Mar 2026, Last Modified: 06 Mar 2026 · ICLR 2026 Trustworthy AI · CC BY 4.0
Keywords: Refinement, model reliability, supervised contrastive loss
TL;DR: Common calibration methods often reduce model refinement; we propose a refinement-promoting loss and a unified training framework, RefCal, that jointly optimizes calibration, refinement, and accuracy.
Abstract: Although deep neural networks achieve high predictive accuracy, their confidence estimates are often unreliable, which can undermine user trust in their decisions. This has driven research into *calibrated* models, where *calibration* measures how well a model's predictive confidence matches the empirical probability of correctness. However, calibration metrics can be improved by post-processing a model's outputs to mimic the uncertainty observed at training time, without genuinely improving the model's understanding. Statisticians therefore recommend that a model be *refined* as well as calibrated. Intuitively, a model is more refined if there is a large gap between its predictive confidence on correct predictions and on incorrect ones (sometimes called sharpness). We observe that common calibration methods often reduce model refinement. To address this, we propose: (1) a novel loss function that promotes refinement and is optimizable via a supervised contrastive loss; (2) a unified training framework, RefCal, that jointly optimizes for calibration, refinement, and accuracy, enhancing DNN reliability.
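To make the two quantities in the abstract concrete, here is a minimal sketch of how calibration and refinement are commonly measured: expected calibration error (ECE) via confidence binning, and a simple refinement/sharpness proxy as the mean-confidence gap between correct and incorrect predictions. This is an illustration of the definitions above, not the paper's method or code; the function names, the 15-bin default, and the toy data are assumptions for the example.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=15):
    """Standard ECE: average |accuracy - confidence| over equal-width
    confidence bins, weighted by the fraction of samples in each bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            bin_acc = correct[mask].mean()    # empirical accuracy in bin
            bin_conf = confidences[mask].mean()  # mean confidence in bin
            ece += mask.mean() * abs(bin_acc - bin_conf)
    return ece

def refinement_gap(confidences, correct):
    """Illustrative refinement (sharpness) proxy: mean confidence on
    correct predictions minus mean confidence on incorrect ones.
    Larger is better; a more refined model separates the two groups."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    if correct.all() or (~correct).all():
        return float("nan")  # gap undefined without both outcomes
    return confidences[correct].mean() - confidences[~correct].mean()

# Toy usage: per-sample top-class confidences and correctness flags.
conf = np.array([0.95, 0.90, 0.60, 0.55, 0.85, 0.40])
hit = np.array([True, True, False, False, True, False])
print(f"ECE: {expected_calibration_error(conf, hit):.3f}")
print(f"Refinement gap: {refinement_gap(conf, hit):.3f}")
```

A post-hoc recalibration step can shrink ECE while also shrinking the refinement gap, which is the failure mode the abstract highlights and that RefCal is designed to avoid.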
Submission Number: 194