Prior and Posterior Networks: A Survey on Evidential Deep Learning Methods For Uncertainty Estimation

Published: 04 Apr 2023, Last Modified: 17 Sept 2024. Accepted by TMLR.
Abstract: Popular approaches for quantifying predictive uncertainty in deep neural networks often involve distributions over weights or multiple models, for instance via Markov Chain sampling, ensembling, or Monte Carlo dropout. These techniques usually incur the overhead of training multiple model instances, or fail to produce sufficiently diverse predictions. This comprehensive survey aims to familiarize the reader with an alternative class of models based on the concept of Evidential Deep Learning: For unfamiliar data, they admit "what they don't know" and fall back onto a prior belief. Furthermore, they allow uncertainty estimation in a single model and forward pass by parameterizing distributions over distributions. This survey recapitulates existing works, focusing on the implementation in a classification setting, before surveying the application of the same paradigm to regression. We also reflect on the strengths and weaknesses compared to other existing methods and provide the most fundamental derivations using a unified notation to aid future research.
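To make the "distributions over distributions" idea concrete, the sketch below shows a minimal Dirichlet-based classification head in the spirit of Evidential Deep Learning: the network outputs Dirichlet concentration parameters in a single forward pass, from which both expected class probabilities and an epistemic uncertainty score can be read off. This is an illustrative PyTorch sketch only, under assumed design choices (softplus evidence, a uniform prior via alpha = evidence + 1); the names `EvidentialHead` and `predictive_and_uncertainty` are hypothetical and not taken from the survey or the linked repository.

```python
# Illustrative sketch of an evidential classification head (not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class EvidentialHead(nn.Module):
    """Predicts Dirichlet concentration parameters alpha instead of class probabilities."""

    def __init__(self, in_features: int, num_classes: int):
        super().__init__()
        self.linear = nn.Linear(in_features, num_classes)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # Evidence must be non-negative; softplus is one common choice (an assumption here).
        evidence = F.softplus(self.linear(features))
        # alpha = evidence + 1 falls back to a uniform Dirichlet prior when no evidence is produced.
        return evidence + 1.0


def predictive_and_uncertainty(alpha: torch.Tensor):
    """The Dirichlet mean gives class probabilities; low total evidence signals high uncertainty."""
    alpha0 = alpha.sum(dim=-1, keepdim=True)        # Dirichlet precision (total evidence)
    probs = alpha / alpha0                          # expected categorical distribution
    num_classes = alpha.shape[-1]
    epistemic = num_classes / alpha0.squeeze(-1)    # one simple uncertainty proxy: K / sum(alpha)
    return probs, epistemic


# Usage: a single forward pass yields both a prediction and an uncertainty estimate.
features = torch.randn(8, 16)                       # e.g. penultimate-layer activations
head = EvidentialHead(in_features=16, num_classes=3)
alpha = head(features)
probs, epistemic = predictive_and_uncertainty(alpha)
```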
Submission Length: Long submission (more than 12 pages of main content)
Previous TMLR Submission Url: https://openreview.net/forum?id=1HVpTXwZxK
Changes Since Last Submission: Since the last submission, we have made an effort to incorporate the reviewers' feedback in order to improve our manuscript. Specifically, we have made the following large changes:
- We dramatically extended the level of detail in the works we describe to give better insight into the methodology of surveyed works. This primarily affects sections 3 and 4, and was a concern shared by multiple reviewers and the Action Editor.
- We extended the discussion comparing Evidential Deep Learning to more popular approaches to uncertainty quantification in section 6.
- Based on the feedback of reviewer wKPZ, we improved the way we introduce concepts such as Bayesian Model Averaging, Evidential Deep Learning and the Dirichlet distribution in sections 2 and 3 to make them easier to follow.
- Based on the input of reviewer 6GHq, we expanded the captions of Figure 2 (Figure 4 in the new version).
- Based on the suggestion of reviewer wKPZ, we added an illustrative example based on the Iris dataset in section 2.4.
- Following reviewer wKPZ, we made the definition of Evidential Deep Learning more explicit in section 2.3.
- We added a note on epistemic uncertainty estimation in general, based on the comment of reviewer jdKi.

Alongside these changes, we also integrated a plethora of other smaller improvements and corrections, and updated the references with works that have been published in the meantime. We would like to follow the Action Editor's suggestion and have the same editor and panel of reviewers assigned to this re-submission, if possible.
Code: https://github.com/Kaleidophon/evidential-deep-learning-survey
Assigned Action Editor: ~Jasper_Snoek1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 720