Aligning Latent Spaces with Flow Priors

18 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: Representation Learning, Flow Models, VAE, Image Generation
Abstract: This paper presents a novel framework for aligning learnable latent spaces to arbitrary prior distributions by leveraging flow-matching generative models as priors. Our method first pretrains a flow-matching model on the prior features to capture the underlying distribution. This fixed flow model then regularizes the latent space via an alignment loss, which reformulates the flow-matching objective to treat the latents as optimization targets. We formally prove that minimizing this alignment loss yields a computationally tractable surrogate for maximizing a variational lower bound on the log-likelihood of the latents under the prior. Notably, the proposed method eliminates expensive likelihood evaluations and avoids ODE solving during optimization. As a proof of concept, we demonstrate in a controlled setting that the alignment-loss landscape closely approximates the negative log-likelihood of the prior. We further validate the approach by regularizing the latent spaces of autoencoders for large-scale ImageNet image generation under diverse prior distributions, accompanied by detailed discussion and ablation studies. With both theoretical and empirical validation, our framework opens a new avenue for latent-space alignment.
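The abstract's core construction, an alignment loss that plugs latents into a frozen flow-matching objective as optimization targets, can be sketched in a toy setting. The code below is a minimal illustration, not the paper's implementation: it assumes the standard linear flow-matching path x_t = (1 - t)·eps + t·x with velocity target x - eps, and stands in for the pretrained flow network with the closed-form optimal velocity for a point-mass "prior" at a hypothetical mode MU. It then checks the abstract's proof-of-concept claim that the loss landscape tracks the prior's negative log-likelihood: latents near the mode incur lower loss than latents far from it, with no likelihood evaluation or ODE solving.

```python
import numpy as np

rng = np.random.default_rng(0)
MU = np.array([2.0, -1.0])  # hypothetical mode of a toy point-mass prior


def frozen_flow_velocity(x_t, t):
    # Stand-in for the pretrained, frozen flow-matching model.
    # For the prior delta(MU) under the linear path
    # x_t = (1 - t) * eps + t * x, the optimal marginal velocity is
    # v*(x_t, t) = (MU - x_t) / (1 - t).
    return (MU - x_t) / (1.0 - t)


def alignment_loss(z, n_samples=4096):
    # Monte Carlo estimate of E_{t, eps} || v_theta(x_t, t) - (z - eps) ||^2,
    # where x_t = (1 - t) * eps + t * z. The flow model is frozen and the
    # latent z is the optimization target, mirroring the reformulated
    # flow-matching objective described in the abstract.
    t = rng.uniform(0.0, 0.99, size=(n_samples, 1))  # avoid t = 1 singularity
    eps = rng.standard_normal((n_samples, z.shape[0]))
    x_t = (1.0 - t) * eps + t * z
    v = frozen_flow_velocity(x_t, t)
    return np.mean(np.sum((v - (z - eps)) ** 2, axis=1))


# Latents near the prior mode score lower than latents far from it,
# consistent with the loss landscape approximating the prior NLL.
near = alignment_loss(MU + 0.1)
far = alignment_loss(MU + 3.0)
print(near < far)  # True
```

In this toy case the noise term cancels analytically, so the loss reduces to a scaled squared distance between z and MU; with a learned flow over a general prior, the same objective is estimated by sampling t and eps exactly as above.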
Supplementary Material: zip
Primary Area: generative models
Submission Number: 14088