Abstract: In recent years, denoising diffusion models have demonstrated outstanding image generation performance.
The prior information about natural images captured by these models is useful for many image reconstruction applications,
where the task is to restore a clean image from its degraded observations.
In this work, we propose a conditional sampling scheme that exploits the prior learned by diffusion models while retaining agreement with the measurements. We then combine it with a novel approach to adapting pre-trained diffusion denoising networks to their input. We examine two adaptation strategies: the first uses only the degraded image, while the second, which we advocate, uses images that are ``nearest neighbors'' of the degraded image, retrieved from a diverse dataset with an off-the-shelf vision-language model. To evaluate our method, we test it on two state-of-the-art publicly available diffusion models, Stable Diffusion and Guided Diffusion. We show that our proposed ``adaptive diffusion for image reconstruction'' (ADIR) approach achieves a significant improvement in image reconstruction tasks.
Our code will be available online upon publication.
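To make the retrieval component of the second adaptation strategy concrete, below is a minimal sketch (not the paper's implementation) of how ``nearest neighbor'' images of a degraded input could be retrieved from an external dataset by cosine similarity of image embeddings, assuming OpenAI's CLIP as the off-the-shelf vision-language model; the helper names and the `dataset_paths` argument are hypothetical placeholders.

```python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def embed(path):
    """CLIP image embedding, unit-normalized so dot products equal cosine similarities."""
    image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0).to(device)
    with torch.no_grad():
        feat = model.encode_image(image)
    return feat / feat.norm(dim=-1, keepdim=True)

def nearest_neighbors(degraded_path, dataset_paths, k=5):
    """Return the k dataset images whose embeddings are closest to the degraded image."""
    query = embed(degraded_path)                               # shape (1, d)
    feats = torch.cat([embed(p) for p in dataset_paths], 0)    # shape (N, d)
    scores = (feats @ query.T).squeeze(1)                      # cosine similarity per image
    top = scores.topk(min(k, len(dataset_paths))).indices
    return [dataset_paths[i] for i in top.tolist()]
```

In the setting described in the abstract, such retrieved neighbors would then serve as the data used to adapt the pre-trained denoising network to the given degraded input.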
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: Added comparisons to other works.
*The changes are colored in red.
Assigned Action Editor: ~Wei_Liu3
Submission Number: 1439