On the Relationship Between Explanation and Prediction: A Causal View

Published: 27 Oct 2023, Last Modified: 09 Nov 2023, NeurIPS XAIA 2023
TL;DR: Using causal inference, we explored how model settings affect explanations, finding that their link to predictions changes with model performance.
Abstract: Explainability has become a central requirement for the development, deployment, and adoption of machine learning (ML) models, yet we have only a limited understanding of what explanation methods can and cannot do. Several factors, such as the data, the model's prediction, the hyperparameters used in training, and random initialization, can all influence downstream explanations. While previous work has empirically hinted that explanations (E) may bear little relationship to the prediction (Y), there has been no conclusive study quantifying this relationship. Our work borrows tools from causal inference to systematically assay it. More specifically, we measure the relationship between E and Y via the treatment effect observed when intervening on their causal ancestors, namely the hyperparameters and the inputs used to generate saliency-based Es and Ys. We discover that Y's relative direct influence on E follows an odd pattern: the influence is higher in the lowest-performing models than in mid-performing models, and it then decreases again in the top-performing models. We believe our work is a promising first step towards better guidance for practitioners, who can make more informed decisions about utilizing these explanations by knowing which factors are at play and how they relate to their end task.
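The causal framing in the abstract, intervening on a shared causal ancestor (e.g., a hyperparameter H) and measuring the resulting treatment effect on the explanation E, can be illustrated with a toy structural model. All coefficients, variable names, and the additive-noise structure below are hypothetical; this is a minimal sketch of the intervention logic, not the paper's actual experimental pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(h, n=10_000):
    """Toy structural causal model in which a hyperparameter H is a
    causal ancestor of both the prediction Y and the explanation E
    (hypothetical coefficients chosen for illustration only)."""
    noise_y = rng.normal(size=n)
    noise_e = rng.normal(size=n)
    y = 2.0 * h + noise_y             # H -> Y
    e = 0.5 * h + 1.5 * y + noise_e   # H -> E (direct) and Y -> E (mediated)
    return y, e

# Intervene on the causal ancestor: do(H=0) vs. do(H=1).
_, e_control = simulate(h=0.0)
_, e_treated = simulate(h=1.0)

# Average treatment effect of the intervention on E.
# In this toy model the true total effect is
# direct (0.5) + mediated through Y (1.5 * 2.0) = 3.5.
ate = e_treated.mean() - e_control.mean()
print(round(ate, 1))
```

Decomposing the total effect into the direct path (H -> E) and the path mediated through Y is the kind of quantity the abstract's "relative direct influence of Y on E" refers to: if the mediated component dominates, explanations track predictions closely; if the direct component dominates, they do not.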
Submission Track: Full Paper Track
Application Domain: None of the above / Not applicable
Clarify Domain: XAI Evaluation Metrics
Survey Question 1: In our study, we explored how different factors, like model settings, influence the explanations given by machine learning models. Using methods from causal inference, we found that the relationship between these explanations and the model's predictions changes based on the model's performance level. Our findings aim to guide users in understanding and using these explanations more effectively in real-world applications.
Survey Question 2: We incorporated explainability because understanding the decisions of machine learning models is crucial for their real-world adoption and for user trust. Without explainability, users might blindly trust or mistrust model outputs, potentially leading to unintended consequences or missed opportunities. Unexplained models can also hinder practitioners from refining or correcting them effectively.
Survey Question 3: In our work, we focused on interventions on the causal ancestors of model explanations, specifically inputs used to generate saliency-based explanations.
Submission Number: 85