Keywords: Deep Generative Models, Flow-based Generative Models, Representation Learning
TL;DR: We propose the "Zero-Flow" learning criterion, which captures sufficient representations of data by enforcing a zero-flow-velocity constraint on rectified flows.
Abstract: Flow-based methods have achieved significant success across generative modeling tasks, capturing nuanced details of complex data distributions. However, few existing works have exploited this unique capability to resolve fine-grained structural details beyond generation. This paper presents a flow-inspired framework for representation learning. First, we demonstrate that the velocity field of a rectified flow trained with independent coupling is zero everywhere at t=0.5 if and only if the source and target distributions are identical. We term this property the zero-flow criterion. Second, we show that this criterion can certify conditional independence, thereby extracting sufficient information from the data. Third, we translate this criterion into a tractable, simulation-free loss function that enables learning amortized Markov blankets in graphical models and latent representations in self-supervised learning tasks. Experiments on both simulated and real-world datasets demonstrate the effectiveness of our approach.
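Illustrative sketch (not from the paper): for a rectified flow with independent coupling, the optimal velocity is v(x, t) = E[X1 - X0 | Xt = x] with Xt = (1 - t) X0 + t X1. At t = 0.5, swapping the independent samples X0 and X1 leaves the midpoint (X0 + X1)/2 unchanged but negates X1 - X0, so the velocity vanishes whenever the two distributions coincide; this gives the "if" direction of the zero-flow criterion. Below is a minimal PyTorch sketch of the standard matching loss together with a hypothetical midpoint penalty; all names (VelocityNet, zero_flow_penalty) are illustrative assumptions, not the paper's implementation.

```python
import torch

class VelocityNet(torch.nn.Module):
    """Tiny MLP v(x, t) -> velocity; purely illustrative."""
    def __init__(self, dim: int):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(dim + 1, 64), torch.nn.SiLU(),
            torch.nn.Linear(64, dim),
        )

    def forward(self, x, t):
        return self.net(torch.cat([x, t], dim=-1))

def rectified_flow_loss(v, x0, x1):
    """Standard simulation-free rectified-flow matching loss under
    independent coupling: regress v(x_t, t) onto the straight-line
    velocity x1 - x0 along the interpolant x_t = (1 - t) x0 + t x1."""
    t = torch.rand(x0.shape[0], 1)
    xt = (1.0 - t) * x0 + t * x1
    return ((v(xt, t) - (x1 - x0)) ** 2).mean()

def zero_flow_penalty(v, x0, x1):
    """Hypothetical zero-flow penalty: the learned velocity at the
    t = 0.5 midpoint should vanish iff the two distributions match."""
    xm = 0.5 * (x0 + x1)
    t = torch.full((x0.shape[0], 1), 0.5)
    return (v(xm, t) ** 2).mean()

# Usage sketch: here x0 and x1 come from the same distribution, so the
# midpoint velocity of a well-fit flow should be driven toward zero.
v = VelocityNet(dim=2)
x0, x1 = torch.randn(128, 2), torch.randn(128, 2)
loss = rectified_flow_loss(v, x0, x1) + zero_flow_penalty(v, x0, x1)
loss.backward()
```

In the paper's setting, the two batches would presumably be drawn from the distributions being compared (e.g., a conditional distribution versus a product of marginals), so a near-zero penalty certifies that they match; this is what allows the criterion to certify conditional independence.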
Submission Number: 87