Provable Derivative-Free Inference with Score-Based Generative Priors

ICLR 2026 Conference Submission 15872 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: score-based generative models, plug-and-play priors, zeroth-order approximation, inverse problems, Monte Carlo sampling
Abstract: A growing trend in solving inverse problems is to use pre-trained score-based generative models (SGMs) as plug-and-play priors. This paradigm retains the generative power of SGMs while allowing adaptation to different forward models without re-training. In parallel, derivative-free posterior sampling algorithms have gained increasing attention for solving inverse problems where the derivative, pseudo-inverse, or full knowledge of the forward model is unavailable or impractical to compute. Despite their empirical success, these methods lack principled foundations and provide no guarantee of convergence to the true posterior distribution, or even to an $\varepsilon$-accurate approximation of it. We propose \textit{zeroth-order annealed plug-and-play Monte Carlo (ZO-APMC)}, the first principled derivative-free framework for solving general inverse problems that requires only forward-model evaluations and a pre-trained SGM prior. We derive complexity bounds for obtaining samples with $\varepsilon$-relative Fisher information under a non-log-concave likelihood and, under an additional Poincar\'e inequality assumption, with $\varepsilon$-accuracy in total variation distance, and we establish weak convergence of ZO-APMC to the target posterior. We verify our theory with numerical experiments and demonstrate the performance of ZO-APMC on both linear and nonlinear inverse problems.
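The derivative-free ingredient the abstract refers to, zeroth-order approximation from forward-model evaluations alone, can be illustrated with a standard two-point random-direction gradient estimator. This is a generic sketch of the textbook technique, not the paper's ZO-APMC algorithm; the function name, smoothing radius `mu`, and sample count are illustrative choices.

```python
import numpy as np

def zo_gradient(f, x, mu=1e-4, num_samples=1000, rng=None):
    """Two-point zeroth-order estimate of the gradient of f at x.

    Averages d * (f(x + mu*u) - f(x - mu*u)) / (2*mu) * u over random
    unit directions u, requiring only evaluations of f (e.g. a black-box
    forward model), never its derivative.
    """
    rng = np.random.default_rng(rng)
    d = x.size
    grad = np.zeros(d)
    for _ in range(num_samples):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)          # uniform direction on the sphere
        grad += d * (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    return grad / num_samples
```

For a smooth f the estimator is (nearly) unbiased, so averaging many directions recovers the gradient up to Monte Carlo noise; in a sampler, such an estimate would stand in for the unavailable likelihood gradient.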
Supplementary Material: zip
Primary Area: probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
Submission Number: 15872