Generating Explanations From Linear Structural Causal Models

22 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: causality, explanations
Abstract: Causality and explainability are intertwined in that they mutually inform each other. For instance, incorporating knowledge of the causal structure of the data into an explanation aligns the reasoning within that explanation with how the data was generated. Surprisingly, this conceptual idea of generating explanations primarily from a suitable causal representation, such as Pearl's Structural Causal Model (SCM), has not been studied before. To this end, we present a first algorithm of this new type of explanation that takes an SCM as input. We start by identifying desiderata for this new approach through a discussion of the shortcomings of previous causal explainers. Our current key restriction is to linear SCMs, for which we define the set of possible questions before deriving the actual algorithm step by step alongside an example. To better understand whether our so-called Structural Causal Explanations (SCEs) are sensible with respect to the initial desiderata, we asked 22 study participants to provide their guesses of causal relations among simple, everyday variables and then evaluated SCE on these SCM approximations. We find that SCE is a suitable explanation scheme, and we follow up our empirical study with SCM approximations discovered by popular graph learning algorithms. In this second experiment we find that SCE reveals deficiencies of current graph learning algorithms, for which we then propose a naïve regularizer that incorporates SCE into learning.
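For readers unfamiliar with the setting, the following is a minimal sketch of a linear SCM (illustrative only, with hypothetical variables; it is not the paper's algorithm). Each variable is a weighted sum of its parents plus independent noise, and a candidate causal explanation can be read off the structural weights along directed paths.

```python
import numpy as np

# Illustrative linear SCM: Age -> Exercise -> Health, plus a direct Age -> Health edge.
# Variable names and coefficients are made up for this sketch.
rng = np.random.default_rng(0)
n = 1000

age      = rng.normal(50, 10, n)                              # exogenous root
exercise = -0.3 * age + rng.normal(0, 1, n)                   # child of Age
health   = 0.8 * exercise - 0.1 * age + rng.normal(0, 1, n)   # child of both

# In a linear SCM, the total causal effect of Age on Health is the sum over
# directed paths of the products of edge weights:
# direct (-0.1) plus indirect via Exercise (-0.3 * 0.8).
total_effect = -0.1 + (-0.3 * 0.8)
print(f"Total effect of Age on Health: {total_effect:.2f}")
```

A verbal explanation in this spirit might read: "Age lowers Health mainly indirectly, by reducing Exercise," since the indirect path contributes most of the total effect.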
Primary Area: visualization or interpretation of learned representations
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 5939