Revisiting the Calibration of Modern Neural Networks

May 21, 2021 (edited Nov 05, 2021) · NeurIPS 2021 Poster · Readers: Everyone
  • Keywords: uncertainty, calibration, image classification
  • TL;DR: We study how model size, architecture and training affect calibration and show that current SOTA models do not follow past trends.
  • Abstract: Accurate estimation of predictive uncertainty (model calibration) is essential for the safe application of neural networks. Many instances of miscalibration in modern neural networks have been reported, suggesting a trend that newer, more accurate models produce poorly calibrated predictions. Here, we revisit this question for recent state-of-the-art image classification models. We systematically relate model calibration and accuracy, and find that the most recent models, notably those not using convolutions, are among the best calibrated. Trends observed in prior model generations, such as decay of calibration with distribution shift or model size, are less pronounced in recent architectures. We also show that model size and amount of pretraining do not fully explain these differences, suggesting that architecture is a major determinant of calibration properties.
  • Supplementary Material: pdf
  • Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
  • Code: https://github.com/google-research/robustness_metrics/tree/master/robustness_metrics/projects/revisiting_calibration (a minimal sketch of the calibration metric appears below)
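
To make the notion of "model calibration" in the abstract concrete, here is a minimal sketch of expected calibration error (ECE), the standard gap-between-confidence-and-accuracy measure used in this literature. This is an illustrative sketch, not the authors' implementation (the linked robustness_metrics repository contains the actual evaluation code); the function name, the numpy-based inputs (top-1 confidences and 0/1 correctness indicators), and the n_bins=15 default are assumptions made for illustration.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=15):
    """Expected calibration error (ECE).

    Bins predictions by confidence, then averages the absolute gap
    between mean confidence and accuracy within each bin, weighted
    by the fraction of samples falling in that bin.
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        # Right-inclusive bins so confidence == 1.0 lands in the last bin.
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap
    return ece

if __name__ == "__main__":
    # Demo on synthetic data: a slightly overconfident model whose
    # accuracy trails its reported confidence by about 0.05.
    rng = np.random.default_rng(0)
    conf = rng.uniform(0.5, 1.0, size=1000)
    correct = rng.random(1000) < (conf - 0.05)
    print(f"ECE: {expected_calibration_error(conf, correct):.4f}")
```

In practice the confidences would be the per-example maxima of a model's softmax outputs and the correctness indicators would compare the argmax predictions against the true labels.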