Keywords: diffusion models, inverse problems, amortized optimization
TL;DR: Amortized optimization for variational diffusion posterior sampling
Abstract: Diffusion models pre-trained on large datasets are powerful priors for inverse problems such as super-resolution and inpainting.
Zero-shot variational diffusion posterior sampling achieves state-of-the-art reconstructions without task-specific training but is slow due to costly test-time optimization.
Supervised diffusion for inverse problems offers fast inference, yet demands large datasets and often fails under unseen degradations.
We introduce a best-of-both-worlds strategy that jointly leverages upstream training and test-time likelihood guidance.
An amortized inference model, trained on a small paired dataset, predicts a strong initialization for the variational approximation solved during variational diffusion posterior sampling, while inference still uses the degradation operator explicitly for guidance.
This combination eliminates many gradient updates at inference time, yielding up to a 1.31× speedup over zero-shot posterior sampling. Importantly, it remains robust to out-of-distribution degradation operators and to limited-data training regimes (e.g., 1% of the pre-training data), outperforming supervised diffusion baselines in these scenarios.
Our results show that coupling modest training with test-time operator knowledge can unlock fast, flexible, and high-quality diffusion reconstructions.
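To make the approach concrete, below is a minimal PyTorch sketch of the idea described in the abstract: a hypothetical amortized network `init_net` predicts the starting point of the variational parameters, and a short test-time optimization then refines them against the known degradation operator. All names, the loss form, and the step counts are illustrative assumptions, not the paper's exact algorithm.

```python
import torch

def amortized_variational_sampling(y, A, init_net, prior_loss,
                                   n_steps=20, lr=1e-2):
    """Refine an amortized initialization with operator-guided updates.

    y          -- observed degraded measurement
    A          -- known degradation operator (callable), e.g. blur or masking
    init_net   -- hypothetical amortized model trained on a small paired
                  dataset; maps y to an initial variational estimate
    prior_loss -- regularization term from the pre-trained diffusion prior
                  (assumed given; its exact form is method-specific)
    """
    # Amortized initialization replaces many early gradient updates,
    # which is where the inference-time speedup comes from.
    x = init_net(y).detach().clone().requires_grad_(True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(n_steps):
        opt.zero_grad()
        # Explicit use of the degradation operator as a data-fidelity
        # term keeps the method robust to out-of-distribution operators.
        data_fit = ((A(x) - y) ** 2).mean()
        loss = data_fit + prior_loss(x)
        loss.backward()
        opt.step()
    return x.detach()
```

Because `A` enters only through the data-fidelity term, swapping in an unseen degradation at test time requires no retraining of `init_net`; a poor initialization simply costs a few extra refinement steps rather than failing outright.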
Primary Area: generative models
Submission Number: 7423