Integrating Intermediate Layer Optimization and Projected Gradient Descent for Solving Inverse Problems with Diffusion Models
Abstract: Inverse problems (IPs) involve reconstructing signals from noisy observations. Recently, diffusion models (DMs) have emerged as a powerful framework for solving IPs, achieving remarkable reconstruction performance. However, existing DM-based methods frequently encounter issues such as heavy computational demands and suboptimal convergence. In this work, building on the recent DMPlug framework, we propose two novel methods, DMILO and DMILO-PGD, to address these challenges. Our first method, DMILO, employs intermediate layer optimization (ILO) to alleviate the memory burden inherent in DMPlug. Additionally, by introducing sparse deviations, we expand the effective range of DMs, enabling the exploration of underlying signals that may lie outside the range of the diffusion model. We further propose DMILO-PGD, which integrates ILO with projected gradient descent (PGD), thereby reducing the risk of suboptimal convergence. We provide an intuitive theoretical analysis of our approaches under appropriate conditions and validate their superiority through extensive experiments on diverse image datasets, encompassing both linear and nonlinear IPs. Our results demonstrate significant performance gains over state-of-the-art methods, highlighting the effectiveness of DMILO and DMILO-PGD in addressing common challenges in DM-based IP solvers.
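To make the two ingredients concrete, the sketch below illustrates, on a toy linear inverse problem, how intermediate-layer optimization with sparse deviations can be combined with a projected-gradient-descent outer loop. This is a minimal illustration under stated assumptions, not the authors' implementation: the two-layer "generator", all dimensions, penalty weights, and step sizes are hypothetical stand-ins for the diffusion-model components used in the paper.

```python
# Minimal sketch (not the authors' implementation) of intermediate-layer
# optimization with sparse deviations plus a projected-gradient-descent (PGD)
# outer loop, for a toy linear inverse problem y = A @ x_true + noise.
# The two-layer "generator" and all dimensions below are illustrative assumptions.
import torch

torch.manual_seed(0)
d_z, d_h, d_x, m = 8, 32, 64, 16          # latent, hidden, signal, measurement dims

# Toy generator G = layer2(layer1(z)); a real diffusion model would supply these blocks.
layer1 = torch.nn.Sequential(torch.nn.Linear(d_z, d_h), torch.nn.Tanh())
layer2 = torch.nn.Linear(d_h, d_x)
A = torch.randn(m, d_x) / m**0.5          # known forward operator
x_true = layer2(layer1(torch.randn(d_z))).detach()
y = A @ x_true + 0.01 * torch.randn(m)    # noisy observation

def project_onto_range(x_target, steps=200, lam=1e-2, lr=1e-2):
    """Approximate projection of x_target onto the (sparsely expanded) range of G.

    Jointly optimizes the input latent z plus per-layer deviations nu and eta;
    an L1 penalty keeps the deviations sparse. (For brevity this sketch optimizes
    everything jointly rather than sequentially layer by layer as in ILO.)
    """
    z = torch.zeros(d_z, requires_grad=True)
    nu = torch.zeros(d_h, requires_grad=True)      # sparse deviation after layer 1
    eta = torch.zeros(d_x, requires_grad=True)     # sparse deviation after layer 2
    opt = torch.optim.Adam([z, nu, eta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        x_hat = layer2(layer1(z) + nu) + eta
        loss = ((x_hat - x_target) ** 2).sum() + lam * (nu.abs().sum() + eta.abs().sum())
        loss.backward()
        opt.step()
    return (layer2(layer1(z) + nu) + eta).detach()

# PGD outer loop: gradient step on the data-fidelity term, then project.
x = torch.zeros(d_x)
step = 0.5
for _ in range(20):
    grad = A.T @ (A @ x - y)              # gradient of 0.5 * ||A x - y||^2
    x = project_onto_range(x - step * grad)

print("relative error:", (torch.norm(x - x_true) / torch.norm(x_true)).item())
```

The sparse deviation variables are what lets the reconstruction leave the exact range of the toy generator, while the alternation between a data-fidelity gradient step and a projection step is the PGD structure described in the abstract.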
Lay Summary: Inverse problems involve recovering a clear signal (like a sharp image) from blurry or noisy data. Recently, diffusion models, a powerful class of generative models, have achieved impressive results in solving these problems. However, existing methods often face challenges such as suboptimal solutions and high computational demands. In this paper, we propose two new methods, DMILO and DMILO-PGD, to address these issues. Our first method, DMILO, reduces memory usage by decoupling the entire optimization process into a series of smaller steps, making it more efficient. We also introduce a technique that enables the model to explore a wider range of possible solutions, including those it wasn’t originally trained to handle. The second method, DMILO-PGD, combines this approach with an optimization strategy called projected gradient descent to mitigate convergence to suboptimal solutions. Our experiments demonstrate the effectiveness of both methods, making it easier and more reliable to recover high-quality images from blurry or noisy data.
Primary Area: Deep Learning->Other Representation Learning
Keywords: diffusion model, inverse problems
Flagged For Ethics Review: true
Submission Number: 9909