Keywords: Causality, Causal abstraction, Causal representation, PCA
TL;DR: We propose an interpretable algorithm for linear constructive causal reduction of low-level causal models into high-level ones.
Abstract: Causal abstraction aims to map a complex causal model into a simpler ("reduced") one. Causal consistency constraints have been established to link the initial "low-level" model to its "high-level" counterpart, and identifiability results for such a mapping can be established when we have access to some information about the high-level variables. In contrast, we study the problem of learning a causal abstraction in an *unsupervised* manner, that is, when we do not have any information on the high-level causal model. In such a setting, there typically exist multiple causally consistent abstractions, and we need to impose additional constraints to unambiguously select a high-level model. To achieve this, we supplement a Kullback-Leibler-divergence-based consistency loss with a projection loss, which aims to find the causal abstraction that best captures the variations of the low-level variables, thereby eliminating trivial solutions. The projection loss bears similarity to the Principal Component Analysis (PCA) algorithm; in this work it is shown to have a causal interpretation. We experimentally show how the abstraction preferred by the projection loss varies with respect to the causal coefficients.
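The PCA connection mentioned in the abstract can be illustrated numerically: for a linear abstraction map, a reconstruction-style projection loss is minimized by the top principal directions of the low-level data. The following is a minimal sketch under assumed specifics (the function name `projection_loss`, the toy data, and the rank-1 abstraction are illustrative, not the paper's actual formulation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy low-level data: 3 variables with most variance along the first axis.
X = rng.normal(size=(500, 3)) @ np.diag([3.0, 1.0, 0.3])
X -= X.mean(axis=0)

def projection_loss(X, T):
    """Mean squared error of reconstructing X from its projection onto the
    row space of the linear abstraction map T (a PCA-style stand-in for the
    projection loss discussed in the abstract)."""
    P = np.linalg.pinv(T) @ T          # orthogonal projector onto rows of T
    return np.mean(np.sum((X - X @ P) ** 2, axis=1))

# The top principal direction minimizes this loss among rank-1 maps.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
T_pca = Vt[:1]                          # 1 x 3 abstraction map (top PC)
T_rand = rng.normal(size=(1, 3))        # arbitrary rank-1 alternative

assert projection_loss(X, T_pca) <= projection_loss(X, T_rand)
```

In the paper's setting this term would be combined with the KL-based consistency loss, so that among the causally consistent abstractions the one preserving the most low-level variation is selected.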
Submission Number: 41