Pitfalls of In-Domain Uncertainty Estimation and Ensembling in Deep Learning

Sep 25, 2019 Blind Submission
  • TL;DR: We highlight the problems with common metrics of in-domain uncertainty and perform a broad study of modern ensembling techniques.
  • Abstract: Uncertainty estimation and ensembling methods go hand-in-hand. Uncertainty estimation is one of the main benchmarks for assessing ensembling performance. At the same time, deep learning ensembles have provided state-of-the-art results in uncertainty estimation. In this work, we focus on in-domain uncertainty for image classification. We explore the standards for its quantification and point out pitfalls of existing metrics. Avoiding these pitfalls, we perform a broad study of different ensembling techniques. To provide more insight into this study, we introduce the deep ensemble equivalent (DEE) score and show that many sophisticated ensembling techniques are equivalent to an ensemble of only a few independently trained networks in terms of test performance.
  • Keywords: uncertainty, in-domain uncertainty, deep ensembles, ensemble learning, deep learning
  • Code: https://github.com/bayesgroup/pytorch-ensembles
  • Slides: pdf
  • Spotlight Video: https://iclr.cc/virtual_2020/poster_BJxI5gHKDr.html
  • Original Pdf: pdf
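
The abstract's core object, a deep ensemble, averages the softmax predictions of several independently trained networks and is typically evaluated with test log-likelihood, the kind of metric the paper's DEE score builds on. Below is a minimal toy sketch of that idea using NumPy; the random logits, the member count, and all names here are illustrative stand-ins, not the paper's actual code (see the linked pytorch-ensembles repository for that).

```python
# Toy sketch of deep-ensemble prediction averaging and test
# log-likelihood. All data here is synthetic and illustrative.
import numpy as np

rng = np.random.default_rng(0)

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def ensemble_predict(member_logits):
    """Average the per-member softmax probabilities (not the raw logits)."""
    probs = softmax(member_logits)     # shape: (members, samples, classes)
    return probs.mean(axis=0)          # shape: (samples, classes)

def log_likelihood(probs, labels):
    """Mean test log-likelihood of the true labels."""
    return np.log(probs[np.arange(len(labels)), labels] + 1e-12).mean()

# Stand-in logits for 5 independently trained networks on 100 test
# points with 10 classes; the true-class logit is boosted so each
# member is weakly accurate.
labels = rng.integers(0, 10, size=100)
logits = rng.normal(size=(5, 100, 10))
logits[:, np.arange(100), labels] += 2.0

single = log_likelihood(softmax(logits[0]), labels)
ensembled = log_likelihood(ensemble_predict(logits), labels)
print(single, ensembled)  # ensembling typically improves log-likelihood
```

By Jensen's inequality, the log-likelihood of the averaged probabilities is at least the average of the members' log-likelihoods, which is one reason ensembling tends to help on this metric; the DEE score then asks how many independently trained networks a given method is worth under such a test-performance measure.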