TL;DR: Position: Explainable AI is Causal Discovery in Disguise
Abstract: Explainable AI (XAI) has intrigued researchers since the earliest days of artificial intelligence. However, with the surge in AI-based applications, especially deep neural network models, the complexity and opacity of AI models have intensified, renewing the call for explainability. As a result, an overwhelming number of methods have been introduced, to the point where surveys now summarize other surveys on XAI. Yet significant challenges persist, including unresolved debates on accuracy-explainability tradeoffs, conflicting evaluation metrics, and repeated failures of sanity checks. Further complications arise from fairness violations, robustness issues, privacy concerns, and susceptibility to manipulation. While there is broad agreement on the importance of XAI, expert panels and major conferences continue to show that the only consensus on how to achieve it is that there is none. This has led some to ask whether the discord stems from a fundamental absence of ground truth for defining “the” correct explanation.
This position paper argues that explainable AI is, in fact, a supervised problem, albeit one whose target is rooted in a profound, often elusive, understanding of reality. In this sense, XAI is causal discovery in disguise. By reframing XAI queries as causal inquiries, whether about data, models, or decisions, we prove the necessity and sufficiency of causal models for XAI and encourage the community to converge on advanced methods for concept and causal discovery, potentially through interactive, approximate causal inference. We contend that without such a causal model, XAI remains limited by its lack of ground truth, keeping us entrenched in uncertainty.
Primary Area: Model Understanding, Explainability, Interpretability, and Trust
Keywords: explainable ai, causal inference, mechanistic interpretability
Submission Number: 148