Keywords: identifiability, Jacobian, component analysis, nonlinear mixture
TL;DR: DICA identifies latent components from nonlinear mixtures by maximizing Jacobian volume, bringing identifiability without source independence, Jacobian sparsity, or auxiliary signals.
Abstract: Latent component identification from unknown *nonlinear* mixtures is a foundational challenge in machine learning, with applications in tasks such as self-supervised learning and causal representation learning. Prior work in *nonlinear independent component analysis* (nICA) has shown that auxiliary signals---such as weak supervision---can support *identifiability* of conditionally independent latent components. More recent approaches explore structural assumptions, like sparsity in the Jacobian of the mixing function, to relax such requirements. In this work, we introduce *Diverse Influence Component Analysis* (DICA), a framework that exploits the convex geometry of the mixing function’s Jacobian. We propose a *Jacobian Volume Maximization* (J-VolMax) criterion, which enables latent component identification by encouraging diversity in their influence on the observed variables. Under suitable conditions, this approach achieves identifiability without relying on auxiliary information, latent component independence, or Jacobian sparsity assumptions. These results extend the scope of identifiability analysis and offer a complementary perspective to existing methods.
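The abstract's core idea, making latent components identifiable by encouraging diverse influence through the volume of the mixing function's Jacobian, can be illustrated numerically. The sketch below is not the paper's J-VolMax algorithm; it is a minimal, hypothetical illustration of Jacobian log-volume as a diversity measure, with all function names (`numerical_jacobian`, `log_jacobian_volume`, the toy mixing maps) invented for this example.

```python
import numpy as np

def numerical_jacobian(f, x, eps=1e-6):
    """Finite-difference Jacobian of f: R^n -> R^m at point x."""
    x = np.asarray(x, dtype=float)
    fx = np.asarray(f(x), dtype=float)
    J = np.zeros((fx.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += eps
        J[:, j] = (np.asarray(f(xp), dtype=float) - fx) / eps
    return J

def log_jacobian_volume(f, x):
    """log-volume of the Jacobian, 0.5 * logdet(J^T J).
    Larger values indicate that the latent components exert
    diverse (non-degenerate) influence on the observations at x."""
    J = numerical_jacobian(f, x)
    _, logdet = np.linalg.slogdet(J.T @ J)
    return 0.5 * logdet

# Toy comparison (illustrative only): a mixing whose components
# have diverse influence vs. one whose outputs are nearly collinear.
diverse = lambda z: np.array([np.tanh(z[0]), 0.5 * z[1] ** 3 + z[1]])
collinear = lambda z: np.array([z[0] + z[1], 1.001 * (z[0] + z[1])])

z0 = np.array([0.3, -0.7])
vol_diverse = log_jacobian_volume(diverse, z0)
vol_collinear = log_jacobian_volume(collinear, z0)
```

Under this toy measure, `vol_diverse` exceeds `vol_collinear`: collinear outputs collapse the Jacobian's column span, driving its volume toward zero, which is the degeneracy a volume-maximization criterion would penalize.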
Supplementary Material: zip
Primary Area: Probabilistic methods (e.g., variational inference, causal inference, Gaussian processes)
Submission Number: 24358