Generating Locally Relevant Explanations Using Causal Rule Discovery

Published: 01 Jan 2024, Last Modified: 14 Nov 2024 · FUZZ 2024 · CC BY-SA 4.0
Abstract: In the real world, an effect often arises via multiple causal mechanisms. Conversely, the behaviour of AI systems is commonly driven by correlations which may, or may not, be linked to causal mechanisms in the real-world system they are modelling. From an AI and XAI point of view, it is desirable for AI systems to model and communicate primarily, if not exclusively, causal mechanisms between variables, affording strong generalisation performance and effective explanations. Indeed, as we discuss in this paper, it is critical for explanations of a given effect not only to reflect possible causal mechanisms, but to highlight the specific causal mechanisms which led to the effect in the given instance. In this light, we propose a rule generation framework which generates rules for fuzzy systems that capture possible causal mechanisms between the input variables and the target variable, as discovered by data-driven causal discovery algorithms for the given data set. For a given sample, i.e., a specific set of inputs, the obtained fuzzy system provides local explanations which distinguish the locally relevant causal mechanism(s) of its effect from other possible, but not applicable, causal mechanisms, and thus avoids both overly simplistic single-cause explanations and exhaustive, potentially misleading ones. Experiments show that the fuzzy systems obtained by the proposed framework achieve performance comparable to classical correlation-based approaches and provide local explanations which indicate the specific causal mechanism for different effects.
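To illustrate the general idea described in the abstract, the following is a minimal sketch, not the paper's actual framework: it assumes a causal discovery step (e.g., PC or GES run separately on the data) has already produced one causal parent set per candidate mechanism, builds one fuzzy rule per mechanism whose antecedent only mentions that mechanism's parents, and reports each rule's firing strength for a given sample so the locally dominant mechanism can be named in the explanation. All variable names, mechanism labels, and membership parameters below are illustrative assumptions.

```python
import numpy as np

def gauss_mf(x, center, sigma):
    """Gaussian fuzzy membership function."""
    return np.exp(-0.5 * ((x - center) / sigma) ** 2)

# Hypothetical causal parent sets for the target, as might be returned by a
# data-driven causal discovery algorithm. Each set stands for one possible
# causal mechanism of the effect.
causal_mechanisms = {
    "mechanism_A": ["x1", "x2"],
    "mechanism_B": ["x3"],
}

# One fuzzy rule per mechanism; antecedents only use that mechanism's parents.
# The (center, sigma) pairs are illustrative; in practice they would be fitted.
rules = [
    {"mechanism": "mechanism_A",
     "antecedent": {"x1": (0.8, 0.2), "x2": (0.3, 0.1)},
     "consequent": 1.0},
    {"mechanism": "mechanism_B",
     "antecedent": {"x3": (0.1, 0.05)},
     "consequent": 1.0},
]

def infer_with_explanation(sample):
    """Return the fuzzy prediction and each rule's firing strength,
    so the locally relevant causal mechanism(s) can be reported."""
    strengths, outputs = [], []
    for rule in rules:
        # Firing strength: product t-norm over the antecedent memberships.
        strength = np.prod([gauss_mf(sample[var], c, s)
                            for var, (c, s) in rule["antecedent"].items()])
        strengths.append(strength)
        outputs.append(rule["consequent"])
    strengths = np.asarray(strengths)
    prediction = float(np.dot(strengths, outputs) / (strengths.sum() + 1e-12))
    explanation = {rule["mechanism"]: float(s) for rule, s in zip(rules, strengths)}
    return prediction, explanation

sample = {"x1": 0.75, "x2": 0.35, "x3": 0.9}
pred, expl = infer_with_explanation(sample)
print(pred, expl)  # mechanism_A fires strongly here; mechanism_B is near zero
```

In this toy run the explanation singles out mechanism_A as the locally relevant cause for the sample, while mechanism_B, although a possible cause in general, contributes essentially nothing, which is the behaviour the abstract attributes to the proposed framework.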