Independent Mechanism Analysis and the Manifold Hypothesis

Published: 27 Oct 2023 · Last Modified: 05 Dec 2023 · CRL@NeurIPS 2023 Poster
Keywords: Independent Mechanism Analysis, Independent Component Analysis, Manifold Hypothesis, Identifiability, Representation Learning, Genericity, High-dimensional data, Concentration Inequalities
TL;DR: We study Independent Mechanism Analysis (IMA) under the manifold hypothesis. We show that it circumvents non-identifiability issues in that setting, and provide a new interpretation of the IMA principle as the consequence of a genericity assumption.
Abstract: Independent Mechanism Analysis (IMA) seeks to address non-identifiability in nonlinear Independent Component Analysis (ICA) by assuming that the Jacobian of the mixing function has orthogonal columns. As is typical in ICA, previous work has focused on the case with an equal number of latent components and observed mixtures. Here, we extend IMA to settings with a larger number of mixtures that reside on a manifold embedded in a higher-dimensional space, in line with the _manifold hypothesis_ in representation learning. For this setting, we show that IMA still circumvents several non-identifiability issues, suggesting that it can also be a beneficial principle for higher-dimensional observations when the manifold hypothesis holds. Further, we prove that the IMA principle is approximately satisfied with high probability (increasing with the number of observed mixtures) when the directions along which the latent components influence the observations are chosen independently at random; a numerical sketch of this genericity claim is given below. This provides a new and rigorous statistical interpretation of IMA.
Submission Number: 41
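As an informal illustration of the genericity result stated in the abstract (a sketch of ours, not the authors' code), the snippet below draws the columns of a tall Jacobian J in R^{n×d} independently at random and measures their deviation from column-orthogonality via one natural extension of the IMA contrast to tall Jacobians, c(J) = Σ_i log ||J_i|| − ½ log det(JᵀJ). This quantity is nonnegative by Hadamard's inequality and zero exactly when the columns are orthogonal. The function name `ima_contrast` and the isotropic Gaussian sampling scheme are illustrative assumptions.

```python
import numpy as np

def ima_contrast(J: np.ndarray) -> float:
    """Deviation of the columns of J from orthogonality:
    sum_i log ||J_i|| - 0.5 * log det(J^T J).
    Nonnegative by Hadamard's inequality; zero iff the columns are orthogonal."""
    col_norms = np.linalg.norm(J, axis=0)      # ||J_i|| for each column i
    sign, logdet = np.linalg.slogdet(J.T @ J)  # log-determinant of the Gram matrix
    assert sign > 0, "J must have full column rank"
    return float(np.sum(np.log(col_norms)) - 0.5 * logdet)

rng = np.random.default_rng(0)
d = 5                                # number of latent components
for n in (5, 20, 100, 1000, 10000):  # number of observed mixtures
    # Each column of J is an independent isotropic Gaussian direction in R^n.
    vals = [ima_contrast(rng.standard_normal((n, d))) for _ in range(200)]
    print(f"n = {n:5d}   mean contrast = {np.mean(vals):.4f}")
```

Running this, the printed mean contrast shrinks toward zero as n grows with d fixed, consistent with the abstract's high-probability statement: independently chosen influence directions become approximately orthogonal in high dimensions, so the IMA principle holds approximately without being imposed.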