Abstract: Causal Shapley values take into account causal relations among dependent features to adjust each feature's contribution to a prediction. A limitation of this approach is that it can only leverage known causal relations. In this work we combine the computation of causal Shapley values with causal discovery, i.e., learning causal graphs from data. In particular, we compute causal explanations across the Markov Equivalence Class (MEC), a set of candidate causal graphs learned from observational data, providing a list of causal Shapley values that explain the prediction. We propose two methods for estimating this list efficiently, drawing on the equivalences of the interventional distributions for a subset of the causal graphs. We evaluate our methods on synthetic and real-world data, showing that they provide explanations that are more consistent with the true causal effects than traditional Shapley value approaches that disregard causal relations. Our results show that even when the Markov Equivalence Class is learned incorrectly, in most settings the explanations produced by our framework are on average closer to the true causal Shapley values than marginal and conditional Shapley values.
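The sketch below illustrates the idea described in the abstract, not the authors' implementation: given a hand-specified MEC for three features, it computes one vector of Monte-Carlo causal Shapley values per candidate DAG, yielding the "list" of explanations the abstract refers to. The toy linear model, the linear-Gaussian structural equations, and the hard-coded MEC are all assumptions made purely for illustration.

```python
# Illustrative sketch only: a list of causal Shapley values, one per DAG in a
# hand-specified Markov equivalence class, for a toy linear model on 3 features.
import itertools
import math
import numpy as np

rng = np.random.default_rng(0)

# Toy prediction model (assumption): linear in the three features.
W = np.array([1.0, 2.0, -1.0])
def f(X):
    return X @ W

# Each candidate DAG in the (assumed) MEC of the chain X1 - X2 - X3, encoded as
# child -> parents, with assumed structural equations x_child = sum(parents) + noise.
MEC = [
    {0: [], 1: [0], 2: [1]},   # X1 -> X2 -> X3
    {1: [], 0: [1], 2: [1]},   # X1 <- X2 -> X3
    {2: [], 1: [2], 0: [1]},   # X1 <- X2 <- X3
]

def topological_order(dag):
    order, placed = [], set()
    while len(order) < len(dag):
        for j, parents in dag.items():
            if j not in placed and all(p in placed for p in parents):
                order.append(j); placed.add(j)
    return order

def sample_interventional(dag, fixed, n=5000):
    """Sample from the interventional distribution do(X_S = x_S) under one DAG."""
    X = np.zeros((n, len(dag)))
    for j in topological_order(dag):
        if j in fixed:
            X[:, j] = fixed[j]                                  # clamp intervened feature
        else:
            X[:, j] = X[:, dag[j]].sum(axis=1) + rng.normal(size=n)
    return X

def causal_shapley(dag, x, n=5000):
    """Monte-Carlo causal Shapley values with v(S) = E[f(X) | do(X_S = x_S)]."""
    d = len(x)
    def v(S):
        return f(sample_interventional(dag, {i: x[i] for i in S}, n)).mean()
    phi = np.zeros(d)
    for i in range(d):
        others = [j for j in range(d) if j != i]
        for r in range(d):
            for S in itertools.combinations(others, r):
                weight = math.factorial(r) * math.factorial(d - r - 1) / math.factorial(d)
                phi[i] += weight * (v(set(S) | {i}) - v(set(S)))
    return phi

# One explanation (Shapley vector) per candidate causal graph in the MEC.
x_explain = np.array([1.0, 0.5, -0.3])
for dag in MEC:
    print(dag, np.round(causal_shapley(dag, x_explain), 3))
```

In this sketch the per-DAG computations are fully independent; the efficiency gains the abstract mentions would come from reusing interventional distributions that coincide across subsets of the candidate graphs, which is not exploited here.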
Submission Type: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Romain_Lopez1
Submission Number: 6573