Keywords: uncertainty estimation, prior networks, posterior networks, conjugate priors, classification, regression, evidential deep learning, Dirichlet
Abstract: Popular approaches for quantifying predictive uncertainty in deep neural networks often involve a set of weights or models, for instance via ensembling or Monte Carlo Dropout. These techniques usually incur the overhead of training multiple model instances, or they yield predictions that are not very diverse. This survey aims to familiarize the reader with an alternative class of models based on the concept of Evidential Deep Learning: for unfamiliar data, they admit "what they don't know" and fall back onto a prior belief. Furthermore, they allow uncertainty estimation in a single model and forward pass by parameterizing distributions over distributions. This survey recapitulates existing works, focusing on the implementation in a classification setting, before surveying the application of the same paradigm to regression problems. We also reflect on the strengths and weaknesses of the discussed approaches compared to existing ones and present the most central theoretical results in order to inform future research.
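To make the core idea concrete, the following is a minimal sketch of the classification case described above: instead of a single categorical distribution, the network's output parameterizes a Dirichlet distribution over categorical distributions, from which both a prediction and an uncertainty score follow in one forward pass. The ReLU-based evidence parameterization and the `K / sum(alpha)` uncertainty score are one common choice among several in the literature; the function and variable names are illustrative, not from any specific paper's code.

```python
import numpy as np

def dirichlet_prediction(logits):
    """Sketch: map raw network logits to Dirichlet concentration
    parameters and derive a prediction plus an uncertainty score.

    Assumed parameterization: non-negative evidence via ReLU, plus 1,
    so zero evidence recovers a flat (uniform) Dirichlet prior."""
    evidence = np.maximum(logits, 0.0)   # non-negative evidence per class
    alpha = evidence + 1.0               # Dirichlet concentrations; all-ones = uniform prior
    strength = alpha.sum()               # total evidence (Dirichlet precision)
    probs = alpha / strength             # expected categorical distribution
    uncertainty = len(alpha) / strength  # K / sum(alpha): 1 with no evidence, -> 0 with much evidence
    return probs, uncertainty

# In-distribution input: strong evidence for class 0 -> low uncertainty.
p_in, u_in = dirichlet_prediction(np.array([10.0, 0.0, 0.0]))

# Out-of-distribution input: no evidence for any class -> the model
# falls back onto the uniform prior with maximal uncertainty.
p_out, u_out = dirichlet_prediction(np.array([-1.0, -2.0, -1.5]))
```

Note that the second call illustrates the "fall back onto a prior belief" behavior: with all evidence clipped to zero, the predictive distribution is uniform and the uncertainty score reaches its maximum of 1.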
One-sentence Summary: A survey on Evidential Deep Learning methods that, compared to popular alternatives, provide uncertainty estimation in a single model and forward pass and make distributional uncertainty for out-of-distribution inputs easy to quantify.