Path Integrals for the Attribution of Model Uncertainties

29 Sept 2021 (modified: 04 May 2025) · ICLR 2022 Conference Withdrawn Submission · Readers: Everyone
Keywords: bayesian neural networks, path integrals, uncertainty attribution
Abstract: Understanding model uncertainties is of key importance in Bayesian machine learning applications. This often requires to meaningfully attribute a model's predictive uncertainty to its input features, however, popular attribution methods are primarily targeted at model scores for classification and regression tasks. Thus, in order to explain uncertainties, state-of-the-art alternatives commonly procure counterfactual feature vectors associated with low uncertainty and proceed by making direct comparisons. Here, we present a novel algorithm for uncertainty attribution in differentiable models, via path integrals which leverage in-distribution curves connecting feature vectors to counterfactual counterparts. We validate our method on benchmark image data sets with varying resolution, and demonstrate that (i) it produces meaningful attributions that significantly simplify interpretability over the existing alternatives and (ii) retains desirable properties from popular attribution methods.
One-sentence Summary: We present a novel algorithm for uncertainty attribution in Bayesian differentiable models via in-distribution path integrals, which significantly simplifies interpretability over existing alternatives.
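The approach summarized above resembles integrated-gradients-style attribution applied to a scalar uncertainty measure: the gradient of the uncertainty is integrated along a path from a low-uncertainty counterfactual to the input. A minimal sketch follows, assuming a straight-line path for simplicity (the paper instead proposes in-distribution curves) and a user-supplied `uncertainty_grad` function; names and the midpoint discretization are illustrative, not the authors' implementation:

```python
import numpy as np

def uncertainty_attribution(x, x_cf, uncertainty_grad, n_steps=50):
    """Path-integral attribution of a scalar uncertainty measure.

    Integrates ``uncertainty_grad`` along a straight-line path from a
    low-uncertainty counterfactual ``x_cf`` to the input ``x``.  A
    straight line is a simplifying assumption; the paper's method
    follows in-distribution curves instead.
    """
    x = np.asarray(x, dtype=float)
    x_cf = np.asarray(x_cf, dtype=float)
    # Midpoint-rule sample points in (0, 1).
    alphas = (np.arange(n_steps) + 0.5) / n_steps
    total = np.zeros_like(x)
    for a in alphas:
        point = x_cf + a * (x - x_cf)
        total += uncertainty_grad(point)
    # Riemann-sum approximation of the path integral; attributions
    # sum (approximately) to u(x) - u(x_cf) by the gradient theorem.
    return (x - x_cf) * total / n_steps
```

For a quadratic uncertainty u(x) = Σ xᵢ², whose gradient is 2x, the attribution from a zero counterfactual recovers xᵢ² per feature, and the attributions sum to the uncertainty gap u(x) − u(x_cf), mirroring the completeness property of integrated gradients.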
Community Implementations: [2 code implementations (CatalyzeX)](https://www.catalyzex.com/paper/path-integrals-for-the-attribution-of-model/code)