Keywords: Explainable Artificial Intelligence, XAI, Structure Learning
TL;DR: We unify explainable AI with causal modeling by establishing conditions under which XAI feature importance coincides with causality. We then propose a new XAI model that outperforms benchmark methods in experiments.
Abstract: Causal relations are typically modeled between random variables (RVs), yet in real-world settings, it is events that cause other events, not RVs causing RVs. We formalize this perspective as Event-Level Causality (ELC), under which Bayesian network structure learning and many Explainable AI (XAI) methods can be viewed as special cases. ELC increases the flexibility of causal modeling by capturing dependencies beyond classical structure learning. It also strengthens XAI by rigorously linking feature importance to causality and showing that different XAI models approximate a principled objective function with varying degrees of fidelity. We propose a new approximation of this objective that, in experiments, clearly outperforms benchmarks such as LIME, L2X, SHAP, and INVASE.
Primary Area: interpretability and explainable AI
Submission Number: 17159