Partial Disentanglement via Mechanism Sparsity

Published: 09 Jul 2022, Last Modified: 22 Oct 2023, CRL@UAI 2022 Oral
Keywords: Causal representation learning, disentanglement, nonlinear ICA, causal discovery
TL;DR: We generalize disentanglement via mechanism sparsity and introduce a novel equivalence relation which specifies, based on the ground-truth graph, which latent factors are expected to remain entangled and which are not.
Abstract: Disentanglement via mechanism sparsity was introduced recently as a principled approach to extract latent factors without supervision when the causal graph relating them in time is sparse, and/or when actions are observed and affect them sparsely. However, this theory applies only to ground-truth graphs satisfying a specific criterion. In this work, we introduce a generalization of this theory, which applies to any ground-truth graph and specifies qualitatively how disentangled the learned representation is expected to be, via a new equivalence relation over models we call consistency. This equivalence captures which factors are expected to remain entangled and which are not based on the specific form of the ground-truth graph. We call this weaker form of identifiability partial disentanglement. The graphical criterion that allows complete disentanglement, proposed in an earlier work, can be derived as a special case of our theory. Finally, we propose to enforce graph sparsity with constrained optimization and illustrate our theory and algorithm in simulations.
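The abstract mentions enforcing graph sparsity with constrained optimization. The following is a minimal sketch of that general idea, not the authors' actual algorithm: a learnable latent adjacency matrix whose total edge mass is kept under a budget via a Lagrange multiplier updated by dual ascent. All names and values (`d_latent`, `sparsity_budget`, `fit_loss`) are illustrative assumptions.

```python
# Minimal sketch (not the paper's code): constrained optimization that
# encourages a sparse learned adjacency over latent factors.
import torch

d_latent = 5
sparsity_budget = 6.0              # assumed upper bound on total edge mass

graph_logits = torch.zeros(d_latent, d_latent, requires_grad=True)
lam = torch.tensor(0.0)            # Lagrange multiplier for the sparsity constraint
opt = torch.optim.Adam([graph_logits], lr=1e-2)

def fit_loss(adj):
    # Placeholder for the model's likelihood / reconstruction term, which in
    # the real method would depend on the learned graph and latent factors.
    return ((adj - torch.eye(d_latent)) ** 2).mean()

for step in range(1000):
    adj = torch.sigmoid(graph_logits)          # soft edge probabilities
    constraint = adj.sum() - sparsity_budget   # <= 0 once the graph is sparse enough
    loss = fit_loss(adj) + lam * constraint
    opt.zero_grad()
    loss.backward()
    opt.step()
    # Dual ascent: increase the multiplier while the constraint is violated.
    with torch.no_grad():
        lam = torch.clamp(lam + 1e-2 * constraint.detach(), min=0.0)
```

In this toy setup the multiplier grows until the soft adjacency falls under the budget, trading off fit against sparsity automatically rather than via a hand-tuned penalty weight.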
Community Implementations: 1 code implementation (https://www.catalyzex.com/paper/arxiv:2207.07732/code)