Distance-Based Learning from Errors for Confidence Calibration

Anonymous

Sep 25, 2019 · ICLR 2020 Conference Blind Submission
  • Abstract: Deep neural networks (DNNs) are poorly calibrated when trained in conventional ways. To improve confidence calibration of DNNs, we propose a novel training method, distance-based learning from errors (DBLE). DBLE bases its confidence estimation on distances in the representation space. We first adapt prototypical learning to train the classification model for DBLE. This yields a representation space in which a test sample's distance to its ground-truth class center calibrates the model's performance. At inference, however, these distances are not available due to the lack of ground-truth labels. To circumvent this, we approximately infer the distance for every test sample by training a confidence model jointly with the classification model, learning only from misclassified training samples, which we show to be highly beneficial for effective learning. On multiple datasets and DNN architectures, we demonstrate that DBLE outperforms alternative single-model confidence calibration approaches. DBLE also achieves performance comparable to computationally expensive ensemble approaches, with lower computational cost and fewer parameters. A minimal sketch of the distance-based confidence idea appears after this list.
  • Keywords: Confidence Calibration, Uncertainty Estimation, Prototypical Learning
  • Code: https://drive.google.com/open?id=1UThGvkkvFvKX8ogsfwvdA3uY8xzDlIuL
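
The snippet below is a minimal, illustrative sketch of the distance-based confidence idea described in the abstract, not the authors' released implementation: class prototypes are computed as mean embeddings, each query is classified by its nearest prototype, and a softmax over negative squared distances yields a confidence score. All function names and the toy data are assumptions for illustration; DBLE's additional confidence model, trained jointly on misclassified samples to infer the relevant distance without ground-truth labels, is not shown here.

```python
# Illustrative sketch of prototype-distance-based confidence (not DBLE itself).
import numpy as np

def class_prototypes(embeddings, labels, num_classes):
    """Mean embedding (prototype) per class."""
    return np.stack([embeddings[labels == c].mean(axis=0)
                     for c in range(num_classes)])

def distances_to_prototypes(queries, prototypes):
    """Euclidean distance from each query embedding to every prototype."""
    diff = queries[:, None, :] - prototypes[None, :, :]   # shape (N, C, D)
    return np.linalg.norm(diff, axis=-1)                  # shape (N, C)

def predict_with_confidence(queries, prototypes, temperature=1.0):
    """Predict the nearest prototype's class; a softmax over negative
    squared distances turns the max probability into a confidence score."""
    d = distances_to_prototypes(queries, prototypes)
    logits = -(d ** 2) / temperature
    logits -= logits.max(axis=1, keepdims=True)           # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    return probs.argmax(axis=1), probs.max(axis=1)

# Toy usage: two well-separated classes in a 2-D representation space.
rng = np.random.default_rng(0)
train_emb = np.concatenate([rng.normal(0.0, 0.5, (50, 2)),
                            rng.normal(3.0, 0.5, (50, 2))])
train_lab = np.array([0] * 50 + [1] * 50)
protos = class_prototypes(train_emb, train_lab, num_classes=2)
test_emb = rng.normal(1.5, 0.5, (5, 2))                   # ambiguous region
print(predict_with_confidence(test_emb, protos))
```

Queries near a class center receive high confidence, while queries between centers (as in the toy example) receive lower confidence, which is the calibration signal the abstract attributes to the learned representation space.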