Keywords: diffusion, guidance, steering, initializations
TL;DR: We show that using strong noise initializations alongside diffusion guidance can provably and experimentally solve fundamentally hard reward guidance problems.
Abstract: In recent years there has been a flurry of activity around using pretrained diffusion models as informed data priors for solving inverse problems, and more generally around steering these models towards certain reward models. Training-free methods like gradient guidance have offered simple, flexible approaches for these tasks, but when the reward is not informative enough, e.g., in inverse problems with highly compressive measurements, these techniques can veer off the data manifold, failing to produce realistic data samples. To address this challenge, we devise a simple algorithm, ReGuidance, that leverages prior methods' solutions as strong initializations, substantially enhancing their realism. Given a candidate solution $x$ produced by a baseline method, we propose inverting the solution by running the unconditional probability flow ODE in reverse starting from $x$, and then using the resulting latent as an initialization for a simple instantiation of diffusion guidance.
In toy settings, we provide theoretical justification for why this technique boosts the reward and brings $x$ closer to the data manifold. Empirically, we evaluate our algorithm on difficult image restoration tasks including large box inpainting, heavily downscaled superresolution, and high noise deblurring with both linear and nonlinear blurring operations. We find that, using a wide range of baseline methods as initializations, applying our method results in much stronger samples with better realism and measurement consistency.
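To make the two-step procedure concrete, the following is a minimal, illustrative sketch in a toy 2D setting: the data prior is a two-component Gaussian mixture (a stand-in for the data manifold) with an analytic score, and the reward is measurement consistency with a compressive linear observation. All names and parameters here (`score`, `reward_grad`, `reguidance`, the guidance weight) are assumptions for illustration, not the paper's reference implementation.

```python
import numpy as np

# Toy setup: 2-component Gaussian mixture prior in 2D; the reward encourages
# consistency with a single compressive linear measurement y ~= A x.
MEANS = np.array([[2.0, 2.0], [-2.0, -2.0]])   # mixture means
DATA_VAR = 0.05                                 # per-mode variance
A = np.array([[1.0, 0.0]])                      # 1D measurement operator (assumed)
y = np.array([1.5])                             # observed measurement (assumed)

def alpha(t):
    # VP-style signal scale with beta(t) = 1
    return np.exp(-0.5 * t)

def score(x, t):
    """Analytic score of the noised mixture p_t(x) under the VP forward process."""
    a = alpha(t)
    var = a ** 2 * DATA_VAR + (1.0 - a ** 2)
    diffs = x - a * MEANS                       # x minus each scaled mode mean
    logw = -0.5 * np.sum(diffs ** 2, axis=1) / var
    w = np.exp(logw - logw.max()); w /= w.sum() # posterior mode weights
    return -(w[:, None] * diffs).sum(axis=0) / var

def reward_grad(x):
    """Gradient of the reward -||A x - y||^2 (measurement consistency)."""
    return -2.0 * A.T @ (A @ x - y)

def pf_ode_step(x, t, dt, guide=0.0):
    # Probability flow ODE drift: dx/dt = -0.5 * (x + score); guidance simply
    # adds the reward gradient to the score (a basic form of diffusion guidance).
    drift = -0.5 * (x + score(x, t) + guide * reward_grad(x))
    return x + drift * dt

def reguidance(x_candidate, T=5.0, n_steps=500, guide=1.0):
    dt = T / n_steps
    # Step 1: invert the candidate by running the unconditional probability
    # flow ODE in reverse (integrating t from 0 up to T) to obtain a latent.
    x = x_candidate.copy()
    for i in range(n_steps):
        x = pf_ode_step(x, i * dt, dt)
    latent = x
    # Step 2: use that latent as the initialization for guided sampling,
    # integrating the ODE back from t=T to t=0 with the reward gradient.
    x = latent.copy()
    for i in range(n_steps):
        x = pf_ode_step(x, T - i * dt, -dt, guide=guide)
    return x

if __name__ == "__main__":
    x0 = np.array([1.0, 0.0])                   # candidate from some baseline method
    x_refined = reguidance(x0)
    print("refined sample:", x_refined, "measurement:", A @ x_refined)
```

In this toy example the inversion step carries the candidate back to a latent consistent with the unconditional model, and the guided reverse integration then pulls the sample toward a mixture mode (higher realism) while the reward term keeps it consistent with the measurement.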
Supplementary Material: zip
Primary Area: generative models
Submission Number: 15887