Keywords: Representation learning, identifiability, ICA, causal representation learning
TL;DR: A hierarchy of generative functions for causal representation learning that relaxes the injectivity assumption.
Abstract: Causal representation learning aims to take some entangled observation, $x$, and recover the latent causal variables $z$ from which the observation was generated via a generative function $g(\cdot): \mathcal{Z}\rightarrow \mathcal{X}$. While this problem is impossible in its full generality, there has been considerable recent progress in establishing a variety of conditions under which the latents are identifiable. All of these approaches share the assumption that $g(\cdot)$ is injective: i.e. for any two observations $x_1$ and $x_2$, if $x_1 = x_2$ then the corresponding latent variables, $z_1$ and $z_2$, are equal. This assumption is restrictive, but dropping it entirely would admit pathological examples that we could never hope to identify, so to make progress beyond injectivity we need to make explicit the important classes of non-injective functions. In this paper we present a formal hierarchy over generative functions that includes injective functions and two non-trivial classes of non-injective functions---occlusion and observable effects---which we argue are important for causal representation learning to consider. We demonstrate that the injectivity assumption is not necessary by proving the first identifiability results in settings with occluded variables.
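To make the injectivity distinction concrete, here is a minimal toy sketch (not from the paper; both maps are hypothetical illustrations): an injective generative map, where distinct latents always yield distinct observations, versus an occlusion-style map, where a "front" latent can hide a "back" latent so that many latents produce the same observation.

```python
import numpy as np

def g_injective(z):
    # Linear map with full column rank: distinct z give distinct x,
    # so x uniquely determines z (the injective setting).
    A = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [1.0, 1.0]])
    return A @ z

def g_occluded(z):
    # Toy "occlusion": only the larger (front) latent is rendered,
    # so the hidden latent can vary without changing the observation.
    return np.maximum(z[0], z[1])

z1 = np.array([2.0, 1.0])
z2 = np.array([2.0, 0.5])  # differs only in the occluded latent

assert not np.allclose(g_injective(z1), g_injective(z2))  # distinct x
assert g_occluded(z1) == g_occluded(z2)  # same x, different z
```

Under the occluded map, observing $x$ alone cannot pin down $z$, which is why identifiability results in this regime require additional structure beyond what the injective case needs.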
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Deep Learning and representational learning