VIPaint: Image Inpainting with Pre-Trained Diffusion Models via Variational Inference

Published: 06 Mar 2025, Last Modified: 24 Apr 2025 · FPI-ICLR2025 Poster · License: CC BY 4.0
Keywords: Diffusion Models, Image Inpainting, Variational Inference
TL;DR: We develop a test-time variational inference algorithm for inpainting images, focusing on large masking ratios, using a pre-trained (latent) diffusion model.
Abstract: Diffusion probabilistic models learn to remove noise added during training, generating novel data (e.g., images) from Gaussian noise through sequential denoising. However, conditioning the generative process on corrupted or masked images is challenging. While various methods have been proposed for inpainting masked images with diffusion priors, they often fail to produce samples from the true conditional distribution, especially for large masked regions. Additionally, many cannot be applied to latent diffusion models, which have been shown to generate high-quality images while offering efficient model training. We propose a hierarchical variational inference algorithm that optimizes a non-Gaussian Markov approximation of the true diffusion posterior. Our VIPaint method outperforms existing approaches in both plausibility and diversity of imputations and is easily extended to other inverse problems such as deblurring and super-resolution.
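To illustrate the general idea of test-time variational inference with a frozen diffusion prior (not the paper's actual hierarchical, non-Gaussian posterior), the following is a minimal sketch. It fits a single-level Gaussian variational posterior over one noisy latent and penalizes mismatch with the observed pixels; `ToyDenoiser`, `variational_inpaint`, and all hyperparameters are hypothetical placeholders, and a real setup would use a frozen pre-trained (latent) diffusion U-Net and a multi-step decoding of the latent.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a frozen, pre-trained noise-prediction network.
# In practice this would be the U-Net of a (latent) diffusion model; the toy conv
# layer only keeps the sketch self-contained and runnable.
class ToyDenoiser(nn.Module):
    def __init__(self, channels: int = 3):
        super().__init__()
        self.net = nn.Conv2d(channels + 1, channels, kernel_size=3, padding=1)

    def forward(self, x_t: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        t_map = t.view(-1, 1, 1, 1).expand(-1, 1, x_t.shape[2], x_t.shape[3])
        return self.net(torch.cat([x_t, t_map], dim=1))

def variational_inpaint(denoiser, y, mask, alpha_bar=0.05, t=0.95,
                        steps=300, lr=5e-2, sigma_y=0.1):
    """Fit a Gaussian variational posterior q(x_t) = N(mu, diag(exp(log_std)^2))
    over a single near-terminal noisy latent. The loss is a crude negative ELBO:
    squared error of a one-step denoised estimate against the observed pixels,
    plus a closed-form KL to an N(0, I) prior marginal."""
    mu = torch.zeros_like(y, requires_grad=True)
    log_std = torch.full_like(y, -1.0, requires_grad=True)
    opt = torch.optim.Adam([mu, log_std], lr=lr)
    t_batch = torch.full((y.shape[0],), t, device=y.device)

    for _ in range(steps):
        opt.zero_grad()
        std = log_std.exp()
        x_t = mu + std * torch.randn_like(mu)        # reparameterized sample
        eps_hat = denoiser(x_t, t_batch)
        # One-step estimate of the clean image (multi-step decoding is more faithful).
        x0_hat = (x_t - (1.0 - alpha_bar) ** 0.5 * eps_hat) / alpha_bar ** 0.5
        # Likelihood term: match observed (unmasked) pixels only.
        recon = ((mask * (x0_hat - y)) ** 2).sum() / (2.0 * sigma_y ** 2)
        # KL(q || N(0, I)); a reasonable prior surrogate only near the terminal timestep.
        kl = 0.5 * (std ** 2 + mu ** 2 - 1.0 - 2.0 * log_std).sum()
        loss = recon + kl
        loss.backward()
        opt.step()
    return mu.detach(), log_std.detach()

# Usage on a toy 32x32 image with the top half masked out.
if __name__ == "__main__":
    y = torch.rand(1, 3, 32, 32)
    mask = torch.ones_like(y)
    mask[..., :16, :] = 0.0                          # unknown region
    mu, log_std = variational_inpaint(ToyDenoiser(), y, mask)
```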
Submission Number: 13