Disentanglement via Mechanism Sparsity Regularization: A New Principle for Nonlinear ICA

Published: 09 Feb 2022, Last Modified: 05 May 2023. CLeaR 2022 Poster.
Keywords: Causal representation learning, disentanglement, nonlinear ICA, causal discovery
TL;DR: We propose a novel principled approach to disentanglement based on learning a sparse causal graphical model of the latent factors.
Abstract: This work introduces a novel principle we call disentanglement via mechanism sparsity regularization, which can be applied when the latent factors of interest depend sparsely on past latent factors and/or observed auxiliary variables. We propose a representation learning method that induces disentanglement by simultaneously learning the latent factors and the sparse causal graphical model that relates them. We develop a rigorous identifiability theory, building on recent nonlinear independent component analysis (ICA) results, that formalizes this principle and shows how the latent variables can be recovered up to permutation if one regularizes the latent mechanisms to be sparse and if some graph connectivity criterion is satisfied by the data generating process. As a special case of our framework, we show how one can leverage unknown-target interventions on the latent factors to disentangle them, thereby drawing further connections between ICA and causality. We propose a VAE-based method in which the latent mechanisms are learned and regularized via binary masks, and validate our theory by showing that the method learns disentangled representations in simulations.
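To make the idea of masked, sparsity-regularized latent mechanisms concrete, here is a minimal NumPy sketch, not the authors' implementation: binary masks G_z and G_a gate which past latents and auxiliary variables each latent mechanism may depend on, and the regularizer penalizes the number of active edges. All names (G_z, G_a, W_z, W_a, lam) and the linear form of the mechanism are illustrative assumptions; in the paper's method the masks are learned jointly with a VAE.

```python
import numpy as np

rng = np.random.default_rng(0)

d_z, d_a = 4, 2  # latent and auxiliary dimensions (illustrative sizes)

# Binary adjacency masks of the causal graph (assumed names):
# G_z[i, j] = 1 means z_j^{t-1} influences z_i^t;
# G_a[i, k] = 1 means auxiliary a_k^t influences z_i^t.
# Fixed here for illustration; in the paper they are learned and regularized.
G_z = rng.integers(0, 2, size=(d_z, d_z)).astype(float)
G_a = rng.integers(0, 2, size=(d_z, d_a)).astype(float)

# Illustrative mechanism parameters (a linear mechanism is an assumption).
W_z = rng.normal(size=(d_z, d_z))
W_a = rng.normal(size=(d_z, d_a))

def masked_mechanism(z_prev, a, G_z, G_a, W_z, W_a):
    """Mean of the latent transition p(z^t | z^{t-1}, a^t), with the graph
    masks applied elementwise to the weights so that masked-out parents
    cannot influence a latent's mechanism."""
    return (G_z * W_z) @ z_prev + (G_a * W_a) @ a

def sparsity_penalty(G_z, G_a, lam=0.1):
    """Mechanism-sparsity regularizer: penalize the total number of edges
    in the latent causal graph (an L0-style count on the binary masks)."""
    return lam * (G_z.sum() + G_a.sum())

z_prev = rng.normal(size=d_z)
a = rng.normal(size=d_a)
mu = masked_mechanism(z_prev, a, G_z, G_a, W_z, W_a)
reg = sparsity_penalty(G_z, G_a)
```

In the full method this penalty would be added to the VAE objective, so that training trades off reconstruction against graph sparsity, which is what drives the identifiability result.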