Counterfactual Graphical Models: Constraints and Inference

Published: 01 May 2025, Last Modified: 18 Jun 2025 · ICML 2025 Spotlight Poster · CC BY 4.0
TL;DR: Constraints and inference using graphical models for counterfactual reasoning
Abstract: Graphical models have been widely used as parsimonious encoders of constraints of the underlying probability models. When organized in a structured way, these models can facilitate the derivation of non-trivial constraints, the inference of quantities of interest, and the optimization of their estimands. In particular, causal diagrams allow for the efficient representation of structural constraints of the underlying causal system. In this paper, we introduce an efficient graphical construction called Ancestral Multi-world Networks that is sound and complete for reading counterfactual independences from a causal diagram using d-separation. Moreover, we introduce the counterfactual (ctf-) calculus, which can be used to transform counterfactual quantities using three rules licensed by the constraints encoded in the diagram. This result generalizes Pearl’s celebrated do-calculus from interventional to counterfactual reasoning.
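The Ancestral Multi-world Network construction itself is defined in the paper; purely as a rough illustration of the underlying idea, the Python sketch below builds a single-intervention parallel-worlds ("twin network") graph and reads off counterfactual independences with the standard moralization-based d-separation test. The twin_network and d_separated helpers and the toy model X → Y are illustrative assumptions, not the paper's construction or notation.

```python
from itertools import combinations

def twin_network(parents, intervention):
    """Build a parallel-worlds ("twin") graph for one hypothetical world.

    `parents` maps each endogenous variable V to its observed parents; every V
    also gets an exogenous parent U_V shared across worlds.  Counterfactual
    copies are renamed V*, and intervened variables lose all incoming edges.
    (Illustrative stand-in only; it handles a single counterfactual world.)
    """
    pa = {}
    for v, ps in parents.items():
        pa[v] = set(ps) | {f"U_{v}"}                       # factual world
        if v in intervention:
            pa[f"{v}*"] = set()                            # do(.) cuts incoming edges
        else:
            pa[f"{v}*"] = {f"{p}*" for p in ps} | {f"U_{v}"}
        pa[f"U_{v}"] = set()                               # shared exogenous root
    return pa

def d_separated(pa, xs, ys, zs):
    """Moralization test for d-separation: restrict the DAG to the ancestors
    of X ∪ Y ∪ Z, moralize it, and check that every path from X to Y is
    intercepted by Z."""
    # 1. ancestral subgraph
    anc, stack = set(), list(xs | ys | zs)
    while stack:
        v = stack.pop()
        if v not in anc:
            anc.add(v)
            stack.extend(pa.get(v, ()))
    # 2. moralize: connect each node to its parents and marry co-parents
    adj = {v: set() for v in anc}
    for v in anc:
        ps = pa.get(v, set()) & anc
        for p in ps:
            adj[v].add(p)
            adj[p].add(v)
        for p, q in combinations(ps, 2):
            adj[p].add(q)
            adj[q].add(p)
    # 3. look for a path from X to Y that avoids Z
    seen, stack = set(), [x for x in xs if x not in zs]
    while stack:
        v = stack.pop()
        if v in seen or v in zs:
            continue
        if v in ys:
            return False
        seen.add(v)
        stack.extend(adj[v] - seen)
    return True

# Toy Markovian model X -> Y; counterfactual world under do(X).
pa = twin_network({"X": [], "Y": ["X"]}, intervention={"X"})
print(d_separated(pa, {"Y*"}, {"X"}, set()))   # True:  Y_x independent of X
print(d_separated(pa, {"Y*"}, {"X"}, {"Y"}))   # False: not independent given Y
```

The first query recovers the familiar fact that Y_x is independent of X in a Markovian model, while the second shows that conditioning on the factual Y opens a path through the shared exogenous variable U_Y.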
Lay Summary: Counterfactual inference allows us to consider, given the factual outcome, how a situation may have evolved had we done something differently. Formally answering this kind of question from empirical data requires strong assumptions about the mechanism that generates the data. In this work, we examine probabilistic models found in the literature to define criteria and rules that allow us to infer counterfactual queries from data collected in the empirical sciences. Whenever a query cannot be inferred from these rules, it can be concluded that the assumptions in the model are not sufficient for the task; hence, further data or additional knowledge of the model is needed.
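As background for the rules mentioned above: each rule of Pearl's do-calculus, which the ctf-calculus generalizes, is licensed by a d-separation test in a mutilated graph. Below is a minimal sketch of Rule 2 (exchanging an intervention for an observation), reusing the d_separated helper from the previous sketch; the remove_outgoing helper and the confounded toy model are illustrative assumptions, not the paper's new rules.

```python
def remove_outgoing(pa, zs):
    """Delete all arrows leaving `zs` (Pearl's G with Z underlined) by
    removing members of `zs` from every other node's parent set."""
    return {v: (set(ps) if v in zs else set(ps) - zs) for v, ps in pa.items()}

# Confounded model: U -> X, U -> Y, X -> Y.
pa = {"U": set(), "X": {"U"}, "Y": {"U", "X"}}
g = remove_outgoing(pa, {"X"})
# Rule 2 licenses P(y | do(x)) = P(y | x) only if Y and X are d-separated in g;
# this fails here because of the back-door path through U ...
print(d_separated(g, {"Y"}, {"X"}, set()))   # False
# ... but it holds once we also condition on U: P(y | do(x), u) = P(y | x, u).
print(d_separated(g, {"Y"}, {"X"}, {"U"}))   # True
```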
Primary Area: General Machine Learning->Causality
Keywords: Counterfactual inference, Graphical Models, Constraints
Submission Number: 8764