Pitfalls of In-Domain Uncertainty Estimation and Ensembling in Deep Learning

Sep 25, 2019 (edited Jul 17, 2020) · ICLR 2020 Conference Blind Submission
  • Spotlight Video: https://iclr.cc/virtual_2020/poster_BJxI5gHKDr.html
  • TL;DR: We highlight the problems with common metrics of in-domain uncertainty and perform a broad study of modern ensembling techniques.
  • Abstract: Uncertainty estimation and ensembling methods go hand-in-hand. Uncertainty estimation is one of the main benchmarks for the assessment of ensembling performance. At the same time, deep learning ensembles have provided state-of-the-art results in uncertainty estimation. In this work, we focus on in-domain uncertainty for image classification. We explore the standards for its quantification and point out pitfalls of existing metrics. Avoiding these pitfalls, we perform a broad study of different ensembling techniques. To provide more insight into this study, we introduce the deep ensemble equivalent score (DEE) and show that many sophisticated ensembling techniques are equivalent, in terms of test performance, to an ensemble of only a few independently trained networks.
  • Keywords: uncertainty, in-domain uncertainty, deep ensembles, ensemble learning, deep learning
  • Code: https://github.com/bayesgroup/pytorch-ensembles
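The deep ensemble equivalent score described in the abstract can be illustrated with a short sketch: given the test score of some ensembling method and the score of a deep ensemble as a function of its size, DEE reports how many independently trained networks the deep ensemble needs to match that score. This is a hypothetical illustration, not the paper's implementation; the function names and the use of piecewise-linear interpolation between integer ensemble sizes are assumptions.

```python
import numpy as np

def deep_ensemble_equivalent(method_score, de_sizes, de_scores):
    """Illustrative DEE sketch: the (interpolated) number of independently
    trained networks whose deep-ensemble test score matches `method_score`.

    de_sizes  -- increasing ensemble sizes, e.g. [1, 2, ..., N]
    de_scores -- deep-ensemble test scores at those sizes
                 (higher is better, assumed non-decreasing in size)
    """
    de_sizes = np.asarray(de_sizes, dtype=float)
    de_scores = np.asarray(de_scores, dtype=float)
    # Clamp outside the measured range: a method no better than a single
    # network gets DEE = 1; one beyond the largest ensemble gets DEE = N.
    if method_score <= de_scores[0]:
        return 1.0
    if method_score >= de_scores[-1]:
        return float(de_sizes[-1])
    # Invert the size -> score curve by piecewise-linear interpolation.
    return float(np.interp(method_score, de_scores, de_sizes))
```

For example, if deep ensembles of sizes 1, 2, and 3 score 0.0, 1.0, and 2.0 on some metric, a method scoring 0.5 has a DEE of 1.5: it is worth an "ensemble" of one and a half independent networks.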