Getting a CLUE: A Method for Explaining Uncertainty Estimates

Published: 12 Jan 2021, Last Modified: 03 Apr 2024
ICLR 2021 Oral
Readers: Everyone
Keywords: interpretability, uncertainty, explainability
Abstract: Both uncertainty estimation and interpretability are important factors for trustworthy machine learning systems. However, there is little work at the intersection of these two areas. We address this gap by proposing a novel method for interpreting uncertainty estimates from differentiable probabilistic models, like Bayesian Neural Networks (BNNs). Our method, Counterfactual Latent Uncertainty Explanations (CLUE), indicates how to change an input, while keeping it on the data manifold, such that a BNN becomes more confident about the input's prediction. We validate CLUE through 1) a novel framework for evaluating counterfactual explanations of uncertainty, 2) a series of ablation experiments, and 3) a user study. Our experiments show that CLUE outperforms baselines and enables practitioners to better understand which input patterns are responsible for predictive uncertainty.
One-sentence Summary: We introduce a method to help explain uncertainties of any differentiable probabilistic model by perturbing input features.
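
The sketch below illustrates the idea described in the abstract, not the authors' released code: starting from an uncertain input's latent encoding, gradient descent jointly lowers the classifier's predictive entropy and a distance penalty to the original input, so the counterfactual stays close to the input while remaining on the generative model's manifold. The names `encoder`, `decoder`, `predict_probs`, and the weight `lam` are illustrative stand-ins; the paper's exact objective and hyperparameters differ.

```python
# Minimal CLUE-style sketch (illustrative, not the authors' implementation).
# Assumes a trained encoder/decoder (e.g. a VAE) and a probabilistic classifier
# whose predictive distribution `predict_probs(x)` we want to make more confident.
import torch


def predictive_entropy(probs, eps=1e-8):
    """Entropy of the predictive distribution; the uncertainty being explained."""
    return -(probs * (probs + eps).log()).sum(dim=-1)


def clue_style_search(x0, encoder, decoder, predict_probs,
                      lam=1.0, steps=200, lr=0.1):
    # Initialise the search at the latent encoding of the uncertain input.
    z = encoder(x0).detach().clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        x_cf = decoder(z)                        # candidate counterfactual on the data manifold
        uncertainty = predictive_entropy(predict_probs(x_cf)).mean()
        distance = (x_cf - x0).abs().mean()      # keep the explanation close to the original input
        loss = uncertainty + lam * distance
        loss.backward()
        opt.step()
    # The difference x_cf - x0 indicates which input features drive the uncertainty.
    return decoder(z).detach()


# Toy usage with stand-in linear modules (real use would plug in a trained VAE + BNN):
if __name__ == "__main__":
    D, Z, K = 20, 4, 3
    enc, dec = torch.nn.Linear(D, Z), torch.nn.Linear(Z, D)
    clf = torch.nn.Linear(D, K)
    x0 = torch.randn(1, D)
    x_cf = clue_style_search(x0, enc, dec, lambda x: clf(x).softmax(dim=-1))
    print("feature changes:", (x_cf - x0).squeeze())
```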
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Code: [3 community implementations (Papers with Code)](https://paperswithcode.com/paper/?openreview=XSLF1XFq5h)