Keywords: flow matching, out-of-distribution detection
Abstract: Flow matching models can learn complex conditional distributions from data. Nevertheless, they do not model the distribution of the conditioning itself, which means they can confidently generate samples from conditioning inputs that lie outside the training distribution. In this work, we introduce _Diverging Flows_, an approach to training flow matching models that enables a single model to detect out-of-distribution (OOD) conditions without hindering its generative capabilities. _Diverging Flows_ augments standard flow matching training with a contrastive objective that learns to separate the velocity fields produced by in- and out-of-distribution conditions, implicitly modeling the distribution of conditions and yielding a clear telltale signal during generation. At inference time, we combine this signal with conformal prediction to obtain statistically valid OOD decisions. Additionally, _Diverging Flows_ does not require real OOD data, enabling fully self-contained training on the target domain. Our results indicate that _Diverging Flows_ is competitive with other OOD detection methods while preserving the predictive quality of the underlying flow model. Ultimately, these results pave the way toward adopting generative models as safe and robust predictors in high-stakes domains such as weather forecasting, robotics, and medical applications.
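The abstract mentions combining an OOD score with conformal prediction to obtain statistically valid decisions. A minimal sketch of split-conformal thresholding on a scalar score, assuming one divergence score per conditioning input (the function names, score distributions, and synthetic data below are hypothetical illustrations, not the paper's implementation):

```python
import numpy as np

def conformal_threshold(cal_scores, alpha=0.1):
    # Split-conformal quantile: with n in-distribution calibration scores,
    # take the ceil((n + 1) * (1 - alpha)) / n empirical quantile so that
    # the false-alarm rate on in-distribution inputs is at most alpha.
    n = len(cal_scores)
    q = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(cal_scores, q, method="higher")

def flag_ood(test_scores, threshold):
    # Declare a conditioning input OOD when its score exceeds the
    # calibrated threshold.
    return test_scores > threshold

# Synthetic scores standing in for the model's divergence signal.
rng = np.random.default_rng(0)
cal = rng.normal(0.0, 1.0, size=500)       # in-distribution calibration scores
test_in = rng.normal(0.0, 1.0, size=1000)  # in-distribution test scores
test_ood = rng.normal(4.0, 1.0, size=1000) # shifted scores mimicking OOD inputs

tau = conformal_threshold(cal, alpha=0.1)
print(flag_ood(test_in, tau).mean())   # roughly at most 0.1 by construction
print(flag_ood(test_ood, tau).mean())  # near 1.0 for strongly shifted scores
```

The coverage guarantee holds for any score function, which is what makes the conformal step a natural fit for a learned OOD signal.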
Primary Area: generative models
Submission Number: 20182