Keywords: Carbon capture and storage, Diffusion models, Neural operators, Inverse problems, Data assimilation, Subsurface flow modeling
TL;DR: We present Fun-DDPS, a decoupled diffusion framework for subsurface CO₂ modeling that learns geological priors separately from physics. It achieves 11× better accuracy under extreme sparsity and matches ground-truth posteriors with 4× less compute.
Abstract: Accurate characterization of subsurface flow is critical for Carbon Capture and Storage (CCS) but remains challenged by the ill-posed nature of inverse problems with sparse observations.
We present Fun-DDPS, a generative framework that combines function-space diffusion models with differentiable neural operator surrogates for both forward and inverse modeling.
Our approach learns a prior distribution over geological parameters (geomodel) using a single-channel diffusion model, then leverages a Local Neural Operator (LNO) surrogate to provide physics-consistent guidance for cross-field conditioning on the dynamics field.
This decoupling allows the diffusion prior to robustly recover missing information in parameter space, while the surrogate provides efficient gradient-based guidance for data assimilation.
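The guidance mechanism described above can be sketched in miniature. The toy below is an illustration only, not the paper's implementation: it replaces the learned diffusion prior with a random initial sample and the LNO surrogate with a linearized forward map `A`, then iterates a gradient-based data-consistency update of the kind used to condition the prior on sparse observations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (hypothetical names): a linear "surrogate" A mapping the
# geomodel to the dynamics field, in place of the differentiable LNO.
n_param, n_obs = 16, 4
A = rng.normal(size=(n_obs, n_param))   # linearized forward surrogate
x_true = rng.normal(size=n_param)       # unknown geomodel
y_obs = A @ x_true                      # sparse observations of the dynamics

def guided_step(x_t, step_size=0.02):
    """One gradient-guidance update: nudge the prior sample toward
    physics consistency with the observed data."""
    resid = A @ x_t - y_obs             # data misfit via the surrogate
    grad = A.T @ resid                  # gradient of 0.5 * ||A x - y||^2
    return x_t - step_size * grad

x = rng.normal(size=n_param)            # a draw from the (toy) prior
for _ in range(500):
    x = guided_step(x)

print("observation misfit:", np.linalg.norm(A @ x - y_obs))
```

In the actual framework this gradient would come from automatic differentiation through the neural operator, interleaved with the reverse-diffusion denoising steps so that the prior fills in unobserved structure while the surrogate enforces the physics.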
We demonstrate Fun-DDPS on synthetic CCS modeling datasets, achieving two key results:
(1) For forward modeling with only 25% observations, Fun-DDPS achieves 7.7% relative error compared to 86.9% for standard surrogates (an 11× improvement), demonstrating that it handles extreme data sparsity where deterministic methods fail.
(2) We provide the first rigorous validation of diffusion-based inverse solvers against asymptotically exact Rejection Sampling (RS) posteriors. Both Fun-DDPS and the joint-state baseline (Fun-DPS) achieve a Jensen–Shannon divergence below 0.06 against the ground-truth posterior. Crucially, Fun-DDPS produces physically consistent realizations free from the high-frequency artifacts observed in the joint-state baseline, while achieving 4× better sample efficiency than rejection sampling.
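For readers unfamiliar with the validation metric, a minimal histogram-based Jensen–Shannon divergence between two sample sets looks like the sketch below. This is a generic illustration (the function name and binning choice are ours, not from the paper) of how posterior samples from a solver could be scored against a reference set such as rejection-sampling draws.

```python
import numpy as np

def js_divergence(p_samples, q_samples, bins=30):
    """Jensen-Shannon divergence between two 1-D sample sets,
    estimated from histograms over a shared range."""
    lo = min(p_samples.min(), q_samples.min())
    hi = max(p_samples.max(), q_samples.max())
    p, _ = np.histogram(p_samples, bins=bins, range=(lo, hi))
    q, _ = np.histogram(q_samples, bins=bins, range=(lo, hi))
    p = p / p.sum()
    q = q / q.sum()
    m = 0.5 * (p + q)                    # mixture distribution

    def kl(a, b):
        nz = a > 0                       # m > 0 wherever a > 0
        return np.sum(a[nz] * np.log(a[nz] / b[nz]))

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

rng = np.random.default_rng(1)
a = rng.normal(0, 1, 10_000)
b = rng.normal(0, 1, 10_000)
print("JS(a, b):", js_divergence(a, b))  # near zero for matching distributions
```

The divergence is 0 for identical distributions and bounded above by log 2, so values below 0.06 indicate close agreement between the solver's posterior and the reference.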
Email Sharing: We authorize the sharing of all author emails with Program Chairs.
Data Release: We authorize the release of our submission and author names to the public in the event of acceptance.
Submission Number: 84