Measuring Calibration in Deep Learning

Sep 25, 2019 (edited Dec 24, 2019) · ICLR 2020 Conference Blind Submission · Readers: Everyone
  • Abstract: Overconfidence and underconfidence in machine learning classifiers are measured by calibration: the degree to which the probabilities predicted for each class match the accuracy of the classifier on those predictions. We propose two new measures of calibration, the Static Calibration Error (SCE) and the Adaptive Calibration Error (ACE). In contrast to the popular Expected Calibration Error (ECE), these measures take into account every prediction made by a model (a sketch of the binned computation follows the keywords below).
  • Keywords: Deep Learning, Multiclass Classification, Classification, Uncertainty Estimation, Calibration
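To make the contrast with ECE concrete, here is a minimal NumPy sketch of a binned, per-class calibration error in the spirit of the SCE named in the abstract. The function name, bin count, and bin-edge handling are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

def static_calibration_error(probs, labels, num_bins=15):
    """Binned, per-class calibration error in the spirit of SCE.

    ECE bins only each example's top predicted probability; here every
    class probability of every prediction contributes to the estimate.

    probs:  (N, K) array of predicted class probabilities (rows sum to 1).
    labels: (N,) array of integer class labels in [0, K).
    """
    n, k = probs.shape
    edges = np.linspace(0.0, 1.0, num_bins + 1)
    class_errors = []
    for c in range(k):
        p_c = probs[:, c]                   # probability assigned to class c
        is_c = (labels == c).astype(float)  # 1.0 where class c is the true label
        err = 0.0
        for b in range(num_bins):
            in_bin = (p_c > edges[b]) & (p_c <= edges[b + 1])
            if b == 0:
                in_bin |= (p_c == 0.0)      # fold the left edge into the first bin
            if not in_bin.any():
                continue
            conf = p_c[in_bin].mean()       # mean predicted probability in the bin
            acc = is_c[in_bin].mean()       # empirical frequency of class c in the bin
            err += (in_bin.sum() / n) * abs(acc - conf)
        class_errors.append(err)
    return float(np.mean(class_errors))

# Toy usage with synthetic predictions and labels.
rng = np.random.default_rng(0)
logits = rng.normal(size=(1000, 10))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
labels = rng.integers(0, 10, size=1000)
print(f"SCE (sketch): {static_calibration_error(probs, labels):.4f}")
```

Unlike ECE, which discards all but the top predicted probability of each example before binning, this per-class binning means low-probability classes still contribute to the reported error.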