Abstract: Deep neural networks (DNNs) are capable of learning nonlinear relationships and achieve high accuracy on many machine learning tasks. However, their performance can degrade due to noisy data, insufficient training data, or distributional shift. In this work, we propose a novel analysis approach that interprets DNN performance degradation through Bayesian uncertainty estimates. Specifically, we derive a new probabilistic modeling method that separates and measures three major sources of performance degradation: data uncertainty, model uncertainty, and distributional uncertainty. By performing sensitivity analysis on the decomposed uncertainty estimates, we can identify which input features influence each type of uncertainty. We validate our analysis and demonstrate its usefulness on real-world datasets in both classification and regression tasks. Our method improves the interpretability of black-box DNNs.
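To make the decomposition concrete, the sketch below shows the standard entropy-based split of predictive uncertainty from Monte Carlo samples (e.g., MC-dropout forward passes): the entropy of the mean prediction is the total uncertainty, the expected per-sample entropy estimates data (aleatoric) uncertainty, and their difference, the mutual information, estimates model (epistemic) uncertainty. This is a minimal illustration of the general idea, not the paper's own derivation; the function name and shapes are assumptions, and distributional uncertainty is omitted since it typically requires an additional model of the predictive distribution itself (e.g., a Dirichlet prior network).

```python
import numpy as np

def decompose_uncertainty(probs: np.ndarray):
    """Entropy-based uncertainty decomposition (illustrative sketch).

    probs: array of shape (S, C) holding S stochastic forward passes
    (e.g., MC dropout), each a categorical distribution over C classes.
    Returns (total, data, model) uncertainty in nats.
    """
    eps = 1e-12  # guard against log(0)
    mean_probs = probs.mean(axis=0)  # average predictive distribution

    # Total uncertainty: entropy of the mean prediction.
    total = -np.sum(mean_probs * np.log(mean_probs + eps))

    # Data (aleatoric) uncertainty: expected entropy of each sample.
    data = -np.mean(np.sum(probs * np.log(probs + eps), axis=1))

    # Model (epistemic) uncertainty: mutual information = total - data.
    model = total - data
    return total, data, model

# Example: samples that agree on a noisy label yield high data uncertainty
# but near-zero model uncertainty; disagreeing samples yield the reverse.
agree = np.array([[0.5, 0.5], [0.5, 0.5]])
disagree = np.array([[0.99, 0.01], [0.01, 0.99]])
print(decompose_uncertainty(agree))     # high data, ~0 model
print(decompose_uncertainty(disagree))  # ~0 data, high model
```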