Solving Inverse Problems with Stochastic Interpolants: Self-Consistent Generative Modeling from Corrupted Data
Track: Main Track
Keywords: inverse problems, generative modeling, stochastic interpolant, corrupted data, self-consistency
TL;DR: Learn a data distribution using stochastic interpolants, requiring only black-box access to the observation model and its outputs.
Abstract: Transport-based methods have emerged as a leading paradigm for building generative models from large datasets. In many scientific and engineering domains, however, clean data samples are unavailable: we observe only corrupted measurements produced by a noisy, ill-conditioned forward map. We introduce a framework for *inverse generative modeling* that learns to generate clean data from these corrupted observations alone. Our approach leverages stochastic interpolants to construct a self-consistent training procedure: we iteratively transport corrupted observations to clean data samples, then enforce consistency by passing the generated samples back through the forward map and matching the original observation distribution. This bypasses the need for clean training data while preserving theoretical guarantees. The resulting method is (i) computationally efficient compared to variational alternatives, and (ii) highly flexible, handling arbitrary nonlinear forward models with only black-box access. We demonstrate superior performance on a variety of inverse problems arising in imaging applications.
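The self-consistent loop sketched in the abstract can be illustrated in a deliberately simplified linear-Gaussian toy setting. This sketch is not the paper's method: the data, forward map, and update rules below are illustrative assumptions, and the affine transport of a Gaussian model stands in for the stochastic-interpolant flow. It shows the key structural point: the forward map is used only as a black box, and the clean-data model is updated until pushing its samples through the forward map reproduces the observation distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (assumed, not from the paper): clean data x ~ N(2, 0.5^2);
# the forward map A(x) = 0.5*x + 0.1*eps is treated as a black box that we can
# only evaluate, never invert or differentiate.
def forward(x):
    return 0.5 * x + 0.1 * rng.normal(size=x.shape)

# We only ever see corrupted observations y = A(x_true), never x_true itself.
y_obs = forward(rng.normal(2.0, 0.5, size=20_000))

# Self-consistent loop: maintain a current clean-data model (here a Gaussian
# N(mu, tau^2); its affine transport from the observations is the
# linear-Gaussian analogue of an interpolant flow), generate candidate clean
# samples, push them back through the black-box forward map, and update until
# A(x_gen) matches the observation distribution.
mu, tau = 0.0, 1.0
for _ in range(200):
    x_gen = mu + tau * rng.normal(size=20_000)  # sample current clean model
    y_gen = forward(x_gen)                      # consistency check via A
    mu  += y_obs.mean() - y_gen.mean()          # moment-matching updates
    tau *= y_obs.std() / y_gen.std()

print(round(mu, 1), round(tau, 1))  # recovers roughly (2.0, 0.5)
```

In this Gaussian special case the fixed point of the loop is exactly the clean distribution consistent with the observations; the paper's general construction replaces the two-moment update with a learned interpolant transport that handles arbitrary nonlinear forward models.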
Submission Number: 68