Keywords: Flow Matching, Neural Operators, Multi-Fidelity Learning, Residual-Augmented Learning, Probabilistic Operator Learning, Generative Modeling for PDEs, Resolution-Invariant Inference, Uncertainty Quantification
TL;DR: Learning probabilistic PDE surrogates via residual flow matching from low- to high-fidelity solutions, enabling data-efficient training, resolution-invariant inference, and improved uncertainty quantification.
Abstract: Learning probabilistic surrogates for partial differential equations (PDEs) remains challenging in data-scarce regimes, where high-fidelity simulations are expensive and many generative models lack resolution invariance. We formulate multi-fidelity PDE learning as probabilistic transport between solution manifolds: low-fidelity solvers define a reference measure, and the model learns residual flows toward high-fidelity solutions in function space. We propose an operator-valued flow matching framework that parameterizes conditional vector fields directly in infinite-dimensional spaces, enabling resolution-invariant inference without retraining. Unlike prior operator-valued flow matching methods that learn full solution distributions at a single fidelity, we explicitly parameterize transport in a residual-function space defined by a physics-based reference solver, enabling data-efficient probabilistic operator learning. The resulting model combines linear operator structure with a FiLM-conditioned Fourier neural operator to capture expressive, input-dependent generative dynamics. Across advection, Burgers', and flow-through-porous-media problems, including cross-resolution tasks, our method improves data efficiency, discretization generalization, and uncertainty quantification compared with single-fidelity probabilistic neural operators.
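The residual flow-matching construction described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, grid size, and the choice of a linear probability path with a Gaussian reference are all assumptions made for the example.

```python
import numpy as np

def residual_fm_targets(u_lo, u_hi, t, x0):
    """Hypothetical sketch of conditional flow-matching targets in residual space.

    The flow transports a reference sample x0 toward the residual
    r = u_hi - u_lo along a linear path; the regression target for the
    conditional vector field is the constant velocity r - x0.
    """
    r = u_hi - u_lo                   # residual between fidelity levels
    x_t = (1.0 - t) * x0 + t * r      # point on the linear probability path
    v_target = r - x0                 # velocity the network should predict
    return x_t, v_target

rng = np.random.default_rng(0)
u_lo = rng.standard_normal(64)                # low-fidelity solution on a 64-point grid
u_hi = u_lo + 0.1 * rng.standard_normal(64)   # corresponding high-fidelity solution
x0 = rng.standard_normal(64)                  # sample from the Gaussian reference measure
x_t, v = residual_fm_targets(u_lo, u_hi, t=0.5, x0=x0)
```

In a full training loop, a neural operator conditioned on `u_lo` and `t` would be regressed onto `v_target`; at `t = 1` the path lands exactly on the residual, so adding the generated residual back to the low-fidelity solution recovers a high-fidelity sample.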
Journal Opt In: No, I do not wish to participate
Journal Corresponding Email: sbhola@umich.edu
Submission Number: 109