Measuring Calibration in Deep Learning

Anonymous

Sep 25, 2019 Blind Submission
  • Abstract: Overconfidence and underconfidence in machine learning classifiers are measured by calibration: the degree to which the probabilities predicted for each class match the accuracy of the classifier on those predictions. We propose two new measures of calibration, the Static Calibration Error (SCE) and the Adaptive Calibration Error (ACE). These measures take into account every prediction made by a model, in contrast to the popular Expected Calibration Error (see the sketch after the keywords below).
  • Keywords: Deep Learning, Multiclass Classification, Classification, Uncertainty Estimation, Calibration
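
A minimal sketch (not the authors' reference implementation) of the contrast described in the abstract: the popular Expected Calibration Error (ECE) bins only each example's top-label confidence, while a per-class "static" binning in the spirit of the proposed SCE uses the probability assigned to every class. The bin count, equal-width bins, and L1 gap are assumptions made for illustration.

    import numpy as np

    def ece(probs, labels, n_bins=15):
        """Top-label ECE: bin max-confidence predictions, compare accuracy to confidence."""
        conf = probs.max(axis=1)                 # confidence of the predicted class
        pred = probs.argmax(axis=1)              # predicted class
        correct = (pred == labels).astype(float)
        bins = np.linspace(0.0, 1.0, n_bins + 1)
        err = 0.0
        for lo, hi in zip(bins[:-1], bins[1:]):
            mask = (conf > lo) & (conf <= hi)
            if mask.any():
                err += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
        return err

    def sce_like(probs, labels, n_bins=15):
        """Per-class variant: bin the probability assigned to every class, not just the top one."""
        n, k = probs.shape
        bins = np.linspace(0.0, 1.0, n_bins + 1)
        err = 0.0
        for c in range(k):
            p_c = probs[:, c]
            y_c = (labels == c).astype(float)    # did class c actually occur?
            for lo, hi in zip(bins[:-1], bins[1:]):
                mask = (p_c > lo) & (p_c <= hi)
                if mask.any():
                    err += (mask.mean() / k) * abs(y_c[mask].mean() - p_c[mask].mean())
        return err

    # Tiny usage example with random softmax outputs (hypothetical data).
    rng = np.random.default_rng(0)
    logits = rng.normal(size=(1000, 10))
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    labels = rng.integers(0, 10, size=1000)
    print(ece(probs, labels), sce_like(probs, labels))

The per-class variant averages the weighted bin gaps over all classes, so low-probability (non-top) predictions contribute to the score; ECE discards them, which is the limitation the abstract points to.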