Keywords: causal abstraction, causal discovery, interventional data, partition refinement lattice
TL;DR: We learn abstracted causal DAGs (*coarsenings*) from interventional data via the partition refinement lattice.
Abstract: Directed acyclic graphical (DAG) models are a powerful tool for representing causal relationships among jointly distributed random variables, especially for data collected across different experimental settings.
However, it is not always practical or desirable to estimate a causal model at the granularity of given features in a particular dataset.
There is a growing body of research on *causal abstraction* to address such problems.
We contribute to this line of research by
(i) providing novel graphical identifiability results for practically-relevant interventional settings,
(ii) proposing an efficient, provably consistent algorithm for directly learning abstract causal graphs from interventional data with unknown intervention targets, and
(iii) uncovering theoretical insights about the lattice structure of the underlying search space, with connections to the field of causal discovery more generally.
As proof of concept, we apply our algorithm to synthetic and real datasets with known ground truths, including measurements from a controlled physical system with interacting light intensity and polarization.
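To make the notion of a coarsening concrete, here is a minimal illustrative sketch (not the paper's algorithm): an abstract causal graph obtained by merging nodes of a micro-level DAG according to a partition, i.e., the quotient graph over partition blocks. The variable names and the `coarsen` helper are hypothetical, chosen only for this example.

```python
# Illustrative sketch: coarsening a causal DAG by a node partition.
# This is NOT the paper's learning algorithm, only the coarsening concept.

def coarsen(edges, partition):
    """Collapse a DAG's edges onto partition blocks.

    edges: iterable of (u, v) directed edges of the micro-level DAG.
    partition: dict mapping each node to its block label.
    Returns the edge set of the abstracted (quotient) graph,
    dropping within-block edges, which become self-loops.
    """
    quotient = set()
    for u, v in edges:
        bu, bv = partition[u], partition[v]
        if bu != bv:
            quotient.add((bu, bv))
    return quotient

# Micro-level DAG: X1 -> X2 -> Y and X1 -> Y
edges = [("X1", "X2"), ("X2", "Y"), ("X1", "Y")]
# A coarsening that merges X1 and X2 into one abstract variable "X"
partition = {"X1": "X", "X2": "X", "Y": "Y"}
print(coarsen(edges, partition))  # {('X', 'Y')}
```

The set of all such partitions, ordered by refinement, forms the lattice that the abstract refers to as the search space.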
PMLR Agreement: pdf
Submission Number: 81