Beyond Confidence: Reliable Models Should Also Quantify Atypicality

Published: 04 Mar 2023, Last Modified: 28 Mar 2023
ICLR 2023 Workshop on Trustworthy ML (Oral)
Keywords: trustworthy machine learning, reliable machine learning, uncertainty, calibration
TL;DR: We find that predictions for atypical (or rare) points are more miscalibrated and overconfident; thus, we propose that models should quantify atypicality, and show that simple atypicality-aware uncertainty quantification provides significant value.
Abstract: While most machine learning models can provide confidence in their predictions, confidence alone is insufficient for understanding and using a model's uncertainty reliably. For instance, a model may produce a low-confidence prediction for a sample that is far from the training distribution or is inherently ambiguous. In this work, we investigate the relationship between how atypical (or rare) a sample is and the reliability of a model's confidence for that sample. First, we show that atypicality can predict miscalibration. In particular, we empirically show that predictions for atypical examples are more miscalibrated and overconfident, and we support our findings with theoretical insights. Using these insights, we show how being atypicality-aware improves uncertainty quantification. Finally, we give a framework to improve decision-making and show that atypicality awareness improves the selective reporting of uncertainty sets. Given these insights, we propose that models should be equipped not only with confidence but also with an atypicality estimator for reliable uncertainty quantification. Our results demonstrate that simple post-hoc atypicality estimators can provide significant value.
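The abstract does not specify the post-hoc estimators; the sketch below is a minimal, hypothetical illustration of the general idea, not the authors' exact method. It scores atypicality as negative log-likelihood under a Gaussian mixture fit to training features (e.g., a network's penultimate-layer embeddings), then learns a separate temperature per atypicality quantile on a held-out calibration set. The function names, the choice of GaussianMixture, and the quantile grouping are all assumptions for illustration.

```python
# Hypothetical sketch of a post-hoc atypicality estimator plus
# atypicality-aware recalibration; illustrative only, not the paper's recipe.
import numpy as np
from sklearn.mixture import GaussianMixture
from scipy.optimize import minimize_scalar
from scipy.special import log_softmax

def fit_atypicality_estimator(train_features, n_components=5):
    """Fit a density model on training features; lower likelihood under
    the training data means a sample is more atypical (rarer)."""
    gmm = GaussianMixture(n_components=n_components, random_state=0)
    gmm.fit(train_features)
    # Atypicality = negative log-likelihood under the fitted model.
    return lambda feats: -gmm.score_samples(feats)

def fit_groupwise_temperatures(logits, labels, atypicality, n_groups=4):
    """Learn one temperature per atypicality quantile on a held-out
    calibration set, so more atypical (and typically more overconfident)
    groups can be cooled more aggressively than typical ones."""
    edges = np.quantile(atypicality, np.linspace(0.0, 1.0, n_groups + 1))
    temps = []
    for g in range(n_groups):
        # Half-open bins, with the last bin capturing the upper tail.
        hi = edges[g + 1] if g < n_groups - 1 else np.inf
        mask = (atypicality >= edges[g]) & (atypicality < hi)
        group_logits, group_labels = logits[mask], labels[mask]

        def nll(t):
            # Negative log-likelihood of the true labels at temperature t.
            log_probs = log_softmax(group_logits / t, axis=1)
            return -log_probs[np.arange(len(group_labels)), group_labels].mean()

        res = minimize_scalar(nll, bounds=(0.05, 10.0), method="bounded")
        temps.append(res.x)
    return edges, np.array(temps)
```

At test time, under these assumptions, a sample's atypicality score selects its quantile group, and the corresponding temperature rescales the logits before the softmax, so confidence for rare samples is tempered more than for typical ones.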