Unified Uncertainty Estimation

20 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Supplementary Material: zip
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: uncertainty estimation, calibration, epistemic, aleatoric
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: new approach to combine aleatoric and epistemic uncertainty, leading to calibrated classifications and rejections
Abstract: In order to build robust, fair, and safe AI systems, we would like our classifiers to recognize and say “I don’t know” when facing test examples that do not belong to any of the in-domain classes observed during training. Perhaps surprisingly, the ubiquitous strategy to predict under uncertainty is the simplistic reject-or-classify rule: abstain from prediction if epistemic uncertainty is high, classify otherwise. We argue that this recipe has several problems: it prevents different sources of uncertainty from communicating with each other, it produces miscalibrated predictions, and it does not allow us to correct for misspecifications in our uncertainty estimates. To address these issues, we introduce unified uncertainty calibration (U2C), a framework for the unified, non-linear calibration of aleatoric and epistemic uncertainties. Unified uncertainty calibration enables a clean analysis of uncertainty estimation via learning theory, and significantly outperforms reject-or-classify across a variety of standard benchmarks.
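The reject-or-classify baseline criticized in the abstract, and the intuition behind a unified treatment of the two uncertainties, can be sketched as follows. This is a minimal illustration, not the paper's actual U2C recipe: the inputs `probs` (softmax class probabilities), `epistemic` (a scalar epistemic-uncertainty score in [0, 1]), and the threshold are all assumptions for the sketch.

```python
import numpy as np

def reject_or_classify(probs, epistemic, threshold=0.5):
    # Baseline rule from the abstract: abstain when epistemic
    # uncertainty is high, otherwise predict the most probable class.
    # `probs`: softmax class probabilities; `epistemic`: scalar score.
    return "abstain" if epistemic > threshold else int(np.argmax(probs))

def unified_scores(probs, epistemic):
    # One loose illustration of a "unified" view (hypothetical, not the
    # paper's exact method): treat rejection as an extra (K+1)-th class
    # by appending the epistemic score and renormalizing, so aleatoric
    # and epistemic uncertainty compete on the same probability simplex
    # and can be calibrated jointly instead of via a hard threshold.
    scores = np.append(probs * (1.0 - epistemic), epistemic)
    return scores / scores.sum()

probs = np.array([0.7, 0.2, 0.1])
print(reject_or_classify(probs, epistemic=0.8))  # high uncertainty: "abstain"
print(unified_scores(probs, epistemic=0.2))      # reject class gets mass 0.2
```

In the unified sketch, abstention is just another prediction whose probability is calibrated alongside the in-domain classes, rather than a separate binary gate applied before classification.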
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 2417