Learning Invariances for Causal Abstraction Inference

ICLR 2026 Conference Submission 20995 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: causality, causal inference, causal abstractions, causal generative modeling, neural causal models, deep learning, representation learning
TL;DR: We show a connection between invariance learning and causal abstractions that allows invariance learning tools to be applied out-of-the-box to representation learning approaches for causal inference tasks.
Abstract: Causal abstraction inference is the task of inferring causal effects from limited data by first mapping complicated low-level data (e.g., pixels) into a simpler high-level space (e.g., an image representation) before performing causal inference at the high level. A major restriction in this task is the abstract invariance condition (AIC), which forces high-level representations to retain all information from the low-level data to prevent any ambiguity in high-level inference. In this work, we provide the first approach that can learn low-dimensional high-level representations satisfying the strictest form of the AIC without weakening the allowable causal inferences. We show how invariances, such as rotational invariance in image data, relate to causal abstractions and how they can be used to learn lower-dimensional representations with out-of-the-box invariance learning tools such as contrastive learning. Finally, we demonstrate our findings empirically, including in a high-dimensional image setting.
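To make the invariance-learning step concrete, below is a minimal sketch (not the authors' method) of how an off-the-shelf contrastive objective can learn a rotation-invariant, low-dimensional representation of image data. The encoder architecture, representation dimension, InfoNCE loss, and rotation augmentations are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch: learning a rotation-invariant low-dimensional
# representation with a standard contrastive (InfoNCE) objective.
# All architectural choices here are assumptions, not the paper's method.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.transforms.functional import rotate


class Encoder(nn.Module):
    """Maps low-level pixels to a low-dimensional high-level representation."""

    def __init__(self, rep_dim: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, rep_dim),
        )

    def forward(self, x):
        # Unit-normalize so cosine similarity is a simple dot product.
        return F.normalize(self.net(x), dim=-1)


def info_nce(z1, z2, temperature: float = 0.1):
    """Standard InfoNCE loss: each view should match its own counterpart."""
    logits = z1 @ z2.t() / temperature           # (B, B) similarity matrix
    targets = torch.arange(z1.size(0))           # positives lie on the diagonal
    return F.cross_entropy(logits, targets)


def train_step(encoder, optimizer, images):
    # Two random rotations of the same image form a positive pair, so the
    # learned representation becomes (approximately) rotation-invariant.
    a1, a2 = torch.randint(0, 360, (2,)).tolist()
    z1 = encoder(rotate(images, a1))
    z2 = encoder(rotate(images, a2))
    loss = info_nce(z1, z2)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    enc = Encoder()
    opt = torch.optim.Adam(enc.parameters(), lr=1e-3)
    batch = torch.rand(8, 1, 28, 28)             # placeholder image batch
    print(train_step(enc, opt, batch))
```

The point of the sketch is only that invariance learning tools of this kind can be applied out of the box; how the resulting representation is tied to the abstract invariance condition is the subject of the paper itself.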
Primary Area: causal reasoning
Submission Number: 20995