Path Integrals for the Attribution of Model Uncertainties

29 Sept 2021 (modified: 22 Oct 2023) · ICLR 2022 Conference Withdrawn Submission
Keywords: bayesian neural networks, path integrals, uncertainty attribution
Abstract: Understanding model uncertainties is of key importance in Bayesian machine learning applications. This often requires meaningfully attributing a model's predictive uncertainty to its input features; however, popular attribution methods are primarily targeted at model scores for classification and regression tasks. Thus, in order to explain uncertainties, state-of-the-art alternatives commonly procure counterfactual feature vectors associated with low uncertainty and proceed by making direct comparisons. Here, we present a novel algorithm for uncertainty attribution in differentiable models via path integrals, which leverage in-distribution curves connecting feature vectors to their counterfactual counterparts. We validate our method on benchmark image data sets of varying resolution and demonstrate that it (i) produces meaningful attributions that significantly simplify interpretability over existing alternatives and (ii) retains desirable properties of popular attribution methods.
One-sentence Summary: We present a novel algorithm for uncertainty attribution in Bayesian differentiable models via in-distribution path integrals, which significantly simplifies interpretability over existing alternatives.
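The abstract describes the core recipe at a high level: integrate the gradient of a predictive-uncertainty measure along a path from a low-uncertainty counterfactual to the input, and read off per-feature attributions. Below is a minimal sketch of that idea, not the authors' implementation: it assumes a PyTorch classifier with stochastic forward passes (e.g. MC dropout) and, for simplicity, a straight-line path, whereas the paper's method leverages in-distribution curves. The names `model`, `x`, and `x_counterfactual` are hypothetical placeholders, not identifiers from the submission.

```python
import torch

def predictive_entropy(model, x, n_samples=16):
    # Monte-Carlo estimate of predictive entropy: average the softmax outputs
    # over stochastic forward passes (e.g. MC dropout), then take the entropy.
    probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(n_samples)])
    p = probs.mean(dim=0)
    return -(p * torch.log(p + 1e-12)).sum(dim=-1)

def uncertainty_attribution(model, x, x_counterfactual, n_steps=50):
    # Riemann-sum approximation of the path integral of the entropy gradient
    # along a straight line from the counterfactual to the input.
    total_grad = torch.zeros_like(x)
    for k in range(1, n_steps + 1):
        alpha = k / n_steps
        point = (x_counterfactual + alpha * (x - x_counterfactual)).detach().requires_grad_(True)
        entropy = predictive_entropy(model, point).sum()
        grad, = torch.autograd.grad(entropy, point)
        total_grad += grad
    # Per-feature attribution: average gradient times displacement along the path.
    return (x - x_counterfactual) * total_grad / n_steps
```

With a straight-line path this reduces to integrated gradients applied to predictive entropy; replacing that path with an in-distribution curve between the input and its counterfactual is what distinguishes the proposed method.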
Community Implementations: 1 code implementation via CatalyzeX (https://www.catalyzex.com/paper/arxiv:2107.08756/code)