Latent-Space Denoising for Causal Representation Learning via Free-Energy-Guided Wasserstein Particle Flows

10 Sept 2025 (modified: 31 Jan 2026) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: Additive noise model; Wasserstein gradient flow; Causal representation learning
Abstract: Learning from corrupted observations is ubiquitous in practice, yet standard training procedures often fail under unknown nonlinear mixing and realistic noise. In causal representation learning (CRL), estimates of latent factors and their causal structure are particularly brittle to such mixing effects. We address this by denoising in a learned latent space, where the corruption approximately follows an additive noise model realized via an embedding encoder. We recover the clean latent distribution by minimizing a free-energy objective that couples a Kullback–Leibler divergence between the convolved clean model and the observed embedding distribution with an entropy regularizer for stability. From this objective, we compute the variational derivative, derive a weighted Wasserstein gradient, and design an explicit particle-flow algorithm that carries out the latent-space denoising. The resulting denoiser serves as a drop-in module for CRL and, across noisy real-world and simulated datasets, improves overall accuracy and structural recovery relative to standard CRL baselines.
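The pipeline the abstract describes can be illustrated in miniature. The following is a hedged 1-D sketch, not the authors' implementation: it assumes additive Gaussian noise of known scale in the latent space, approximates the convolved clean model and the observed embedding distribution with Gaussian mixtures on a fixed grid, and takes explicit Euler steps along the (negative) particle gradient of the free energy KL(ρ ∗ 𝒩(0, σ²) ‖ q_obs) − λ·H(ρ). All names (`particle_flow`, `free_energy_and_grad`) and hyperparameters are illustrative; the paper's weighted Wasserstein gradient and encoder are not reproduced here.

```python
import numpy as np

def gauss(y, mu, s):
    """Gaussian pdf N(y; mu, s^2), broadcast over grid x particles."""
    return np.exp(-0.5 * ((y - mu) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

def mixture_density(grid, centers, s):
    """Density of the equal-weight mixture (1/n) sum_i N(center_i, s^2)."""
    return gauss(grid[:, None], centers[None, :], s).mean(axis=1)

def free_energy_and_grad(x, y_obs, grid, sigma, h, lam):
    """Free energy F = KL(p || q) - lam * H(rho) and its particle gradient.

    p: clean particle measure convolved with the noise kernel (width sigma),
    q: kernel density estimate of the observed embeddings (bandwidth h),
    H: differential entropy of a KDE of the particles (bandwidth h).
    """
    dy = grid[1] - grid[0]
    p = mixture_density(grid, x, sigma) + 1e-12      # convolved clean model
    q = mixture_density(grid, y_obs, h) + 1e-12      # observed embedding KDE
    r = mixture_density(grid, x, h) + 1e-12          # particle KDE for entropy
    kl = np.sum(p * np.log(p / q)) * dy
    ent = -np.sum(r * np.log(r)) * dy
    F = kl - lam * ent
    # d p(y) / d x_k = (1/n) * phi_sigma(y - x_k) * (y - x_k) / sigma^2
    dp = gauss(grid[:, None], x[None, :], sigma) \
        * (grid[:, None] - x[None, :]) / sigma**2 / len(x)
    dr = gauss(grid[:, None], x[None, :], h) \
        * (grid[:, None] - x[None, :]) / h**2 / len(x)
    gkl = ((np.log(p / q) + 1.0)[:, None] * dp).sum(axis=0) * dy
    gent = -((np.log(r) + 1.0)[:, None] * dr).sum(axis=0) * dy
    return F, gkl - lam * gent

def particle_flow(y_obs, sigma, steps=200, eta=0.5, lam=0.05, h=0.3):
    """Explicit Euler particle flow: move particles down the free-energy gradient."""
    grid = np.linspace(y_obs.min() - 3.0, y_obs.max() + 3.0, 400)
    x = y_obs.copy()                     # initialize particles at the observations
    for _ in range(steps):
        _, g = free_energy_and_grad(x, y_obs, grid, sigma, h, lam)
        x = x - eta * g                  # explicit Euler step in particle space
    return x
```

Minimizing the KL term pulls the particles so that their noise-convolved density matches the observed embedding distribution, while the entropy regularizer keeps the particle measure from collapsing; both effects mirror the roles the abstract assigns them, at toy scale.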
Supplementary Material: zip
Primary Area: causal reasoning
Submission Number: 3669