Abstract: Learning causal representations without assumptions is known to be fundamentally impossible, thus establishing the need for suitable inductive biases. At the same time, the invariance of causal mechanisms has emerged as a promising principle for addressing the challenge of out-of-distribution prediction that machine learning models face. In this work, we explore this invariance principle as a candidate assumption for achieving identifiability of causal representations. While invariance has been utilized for inference in settings where the causal variables are observed, theoretical insights into this principle in the context of causal representation learning are largely missing. We assay the connection between invariance and causal representation learning by establishing impossibility results showing that invariance alone is insufficient to identify latent causal variables. Together with practical considerations, we use our results to reflect generally on the commonly used notion of identifiability in causal representation learning and on potential adaptations of this goal moving forward.
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Mingming_Gong1
Submission Number: 2756