Abstract: We address the problem of uncertainty calibration. While standard deep neural networks typically yield uncalibrated predictions, calibrated confidence scores that are representative of the true likelihood of a prediction can be achieved using post-hoc calibration methods. However, to date, the focus of these approaches has been on in-domain calibration. Our contribution is two-fold. First, we show that existing post-hoc calibration methods yield highly overconfident predictions under domain shift. Second, we introduce a simple strategy where perturbations are applied to samples in the validation set before performing the post-hoc calibration step. In extensive experiments, we demonstrate that this perturbation step results in substantially better calibration under domain shift on a wide range of architectures and modelling tasks.
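To illustrate the idea described in the abstract, the following is a minimal sketch (not the authors' reference implementation) of perturbing validation samples before fitting a post-hoc calibrator. Temperature scaling is used as the post-hoc method here, and the additive Gaussian perturbation with standard deviation `sigma`, as well as the `model` and `val_loader` names, are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def collect_perturbed_logits(model, loader, sigma=0.1, device="cpu"):
    """Run the model on perturbed validation samples and collect logits and labels."""
    model.eval()
    logits, labels = [], []
    with torch.no_grad():
        for x, y in loader:
            x = x.to(device)
            # Hypothetical perturbation: additive Gaussian noise on the inputs.
            x_pert = x + sigma * torch.randn_like(x)
            logits.append(model(x_pert).cpu())
            labels.append(y)
    return torch.cat(logits), torch.cat(labels)

def fit_temperature(logits, labels, max_iter=100):
    """Fit a single temperature T by minimising the NLL on the (perturbed) validation set."""
    log_t = torch.zeros(1, requires_grad=True)  # optimise log T so that T stays positive
    optimizer = torch.optim.LBFGS([log_t], lr=0.1, max_iter=max_iter)

    def closure():
        optimizer.zero_grad()
        loss = F.cross_entropy(logits / log_t.exp(), labels)
        loss.backward()
        return loss

    optimizer.step(closure)
    return log_t.exp().item()

# Usage (assuming `model` and `val_loader` exist):
# logits, labels = collect_perturbed_logits(model, val_loader, sigma=0.1)
# T = fit_temperature(logits, labels)
# calibrated_probs = F.softmax(test_logits / T, dim=1)
```

The only change relative to standard temperature scaling is that the calibration set is perturbed before the temperature is fitted; the choice of perturbation and its strength are free parameters of the sketch.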