Abstract: Characterizing the confidence of machine learning predictions unlocks models that know when they do not know. In this study, we propose a framework for assessing the quality of predictive distributions obtained from deep learning models. The framework supports the representation of both aleatory and epistemic uncertainty, relies on simulated data to generate distinct sources of uncertainty, and enables quantitative evaluation of uncertainty estimation techniques. We demonstrate the framework with a case study that highlights the insights it can provide.
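A minimal sketch of the core idea, not the authors' implementation: because the data are simulated, the true noise level is known, so a predictive distribution can be scored quantitatively (here with average negative log-likelihood) and compared against the ideal reference. The predicted mean and standard deviation below are hypothetical stand-ins for a deep model's output.

```python
# Sketch (assumed setup, not the paper's code): evaluate a predictive
# distribution on simulated data where the true aleatory noise is known.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Simulated regression data with known observation (aleatory) noise.
true_sigma = 0.3
x = rng.uniform(-3, 3, size=1000)
y = np.sin(x) + rng.normal(0.0, true_sigma, size=x.shape)

# Hypothetical predictive distribution from a deep model
# (e.g., produced by MC dropout or a deep ensemble in practice).
pred_mean = np.sin(x)            # assumed predicted mean
pred_std = np.full_like(x, 0.5)  # assumed predicted uncertainty

# Quantitative evaluation: average negative log-likelihood of the targets
# under the predictive distribution (lower is better).
nll = -norm.logpdf(y, loc=pred_mean, scale=pred_std).mean()

# Reference point: NLL of the ideal predictive distribution, which is
# available only because the data-generating process is simulated.
ideal_nll = -norm.logpdf(y, loc=np.sin(x), scale=true_sigma).mean()

print(f"model NLL: {nll:.3f}  ideal NLL: {ideal_nll:.3f}")
```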