Estimation of Concept Explanations Should be Uncertainty Aware

Published: 27 Oct 2023, Last Modified: 27 Oct 2023, NeurIPS XAIA 2023
TL;DR: Uncertainty-aware estimation improves the reliability of concept explanations.
Abstract: Model explanations are valuable for interpreting and debugging prediction models. We study a particular kind of global explanation, called concept explanations, whose goal is to interpret a model in terms of human-understandable concepts. Recent advances in multi-modal learning have rekindled interest in concept explanations and led to several label-efficient estimation methods. However, existing methods are sensitive to the choice of concepts and to the dataset used to compute the explanations. We observe that this instability arises because existing estimators do not model noise. We propose an uncertainty-aware estimation method that readily improves the reliability of concept explanations. We show through theoretical analysis and empirical evaluation that explanations computed by our method are stable to the choice of concepts and to data shifts, while remaining label-efficient and faithful.
Submission Track: Full Paper Track
Application Domain: Computer Vision
Survey Question 1: We focus on enhancing the interpretation of prediction models using concept explanations, which aim to explain a model in terms of human-understandable concepts. We identified that existing methods often produce unstable explanations because they do not account for noise. Explainability is central to our work: we introduce an uncertainty-aware estimation method that makes these explanations reliable by accounting for uncertainty.
Survey Question 2: Previous concept-explanation methods often suffer from instability, mainly because they do not account for noise, which leads to explanations that can be inconsistent or unreliable. We therefore propose a reliable concept-explanation method that yields a stable and trustworthy interpretation of models, one that is easily understood by humans.
Survey Question 3: We focus on concept explanations and employ uncertainty-aware estimation, building on techniques such as CBM (Concept Bottleneck Models) and TCAV (Testing with Concept Activation Vectors).
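As a rough illustration of what uncertainty-aware estimation of concept importance can look like, here is a minimal sketch combining a generic TCAV-style score with bootstrap resampling. This is not the estimator proposed in the paper: the synthetic arrays (`acts_concept`, `acts_random`, `grads`) stand in for hidden-layer activations and class-logit gradients from a real model, and bootstrapping is just one simple way to expose how sensitive the score is to the probe data.

```python
# Illustrative sketch only: TCAV-style concept importance with bootstrap
# uncertainty. Synthetic data stands in for model activations/gradients.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

d = 32                                            # activation dimensionality
acts_concept = rng.normal(1.0, 1.0, (100, d))     # activations of concept examples
acts_random  = rng.normal(0.0, 1.0, (100, d))     # activations of random examples
grads = rng.normal(0.5, 1.0, (200, d))            # per-example class-logit gradients

def tcav_score(acts_pos, acts_neg, grads):
    """Fit a concept activation vector (linear separator) and return the
    fraction of examples whose logit gradient aligns with the concept."""
    X = np.vstack([acts_pos, acts_neg])
    y = np.r_[np.ones(len(acts_pos)), np.zeros(len(acts_neg))]
    cav = LogisticRegression(max_iter=1000).fit(X, y).coef_[0]
    return float(np.mean(grads @ cav > 0))

# Bootstrap over the probe set: recomputing the CAV on resampled concept and
# random examples gives a spread of scores rather than a single point estimate.
scores = []
for _ in range(200):
    i = rng.integers(0, len(acts_concept), len(acts_concept))
    j = rng.integers(0, len(acts_random), len(acts_random))
    scores.append(tcav_score(acts_concept[i], acts_random[j], grads))

print(f"TCAV score: {np.mean(scores):.3f} +/- {np.std(scores):.3f}")
```

A score whose bootstrap spread is wide signals exactly the kind of instability the abstract describes: the explanation depends heavily on which probe examples were chosen, so reporting the uncertainty alongside the point estimate is what makes the explanation trustworthy.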
Submission Number: 28