Press ECCS to Doubt (Your Causal Graph)

Published: 01 Jan 2024, Last Modified: 15 May 2025. GUIDE-AI@SIGMOD 2024. License: CC BY-SA 4.0
Abstract: Techniques from the theory of causality have seen extensive use in the natural and social sciences, since they allow scientists to explicitly model assumptions and draw quantitative causal conclusions. More recently, causality has also gathered interest in many computer science sub-fields, including machine learning and systems. A causal model is usually represented as a causal graph, often discovered automatically from available data. When running a full constraint-based causal discovery algorithm and correctly orienting all edges is computationally intractable, automatically generated causal graphs are prone to error, calling for expensive manual graph verification. Understanding which parts of a causal graph have the largest impact on downstream results is essential for expediting this verification process. In this work, we present ECCS – a framework for Exposing Critical Causal Structures within a causal graph, with respect to a given Average Treatment Effect (ATE) calculation. We formalize the Interactive Causal Graph Verification problem, in which user judgments about edges in the causal graph are solicited sequentially, with the goal of minimizing the absolute error in the ATE of interest (without advance access to its ground-truth value). We present three algorithms to solve this problem. In a preliminary evaluation, our best-performing algorithm, AdjSetEdit, solicits a sequence of 10 user judgments that outperforms a randomly chosen sequence of the same length by more than 60%, with time complexity linear in the number of data points and polynomial in the number of variables.
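
To make the setting concrete, the following is a minimal, hypothetical Python sketch of an interactive graph verification loop. It is not the ECCS system or the AdjSetEdit algorithm described above: it queries edges in an arbitrary fixed order, accepts keep/remove judgments through an illustrative edge_oracle callback, and re-estimates the ATE after each judgment with a simple linear backdoor adjustment over the treatment's parents. All function names and interfaces are assumptions made for illustration only.

import networkx as nx
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression


def estimate_ate(data, graph, treatment, outcome):
    # Backdoor-style adjustment using the treatment's parents in the DAG
    # (a valid adjustment set when there are no latent confounders), with a
    # linear outcome model; the paper's estimator may differ.
    adjustment = list(graph.predecessors(treatment))
    X = data[[treatment] + adjustment].to_numpy()
    y = data[outcome].to_numpy()
    return LinearRegression().fit(X, y).coef_[0]  # coefficient on treatment


def verify_interactively(data, graph, treatment, outcome, edge_oracle, budget=10):
    # Naive baseline: solicit judgments about edges in an arbitrary fixed
    # order; ECCS instead aims to choose the judgments that reduce ATE error
    # the fastest.
    for edge in list(graph.edges())[:budget]:
        if not edge_oracle(edge):          # user says the edge is wrong
            graph.remove_edge(*edge)
        print(edge, "ATE estimate:", estimate_ate(data, graph, treatment, outcome))
    return graph


# Toy usage: Z confounds T and Y; the true ATE of T on Y is 2.0.
rng = np.random.default_rng(0)
Z = rng.normal(size=5000)
T = Z + rng.normal(size=5000)
Y = 2.0 * T + 3.0 * Z + rng.normal(size=5000)
data = pd.DataFrame({"Z": Z, "T": T, "Y": Y})
dag = nx.DiGraph([("Z", "T"), ("Z", "Y"), ("T", "Y")])
verify_interactively(data, dag, "T", "Y", edge_oracle=lambda e: True, budget=3)

In this toy run every judgment keeps its edge, so the estimate stays near the true effect of 2.0; wrongly removing the Z to T edge would drop Z from the adjustment set and bias the estimate, which is exactly the kind of downstream ATE error the verification process is meant to minimize.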