Solving Linear Inverse Problems Using the Prior Implicit in a Denoiser

Published: 06 Jul 2022, Last Modified: 22 Oct 2023
NeurIPS 2020 Deep Inverse Workshop Poster
Keywords: Prior image models, linear inverse problem, transfer learning
TL;DR: We derive a method for sampling from the image prior implicit in a trained denoiser, directly or constrained by linear measurements
Abstract: Prior probability models are a central component of many image processing problems, but density estimation is notoriously difficult for high-dimensional signals such as photographic images. Deep neural networks have provided state-of-the-art solutions for problems such as denoising, which implicitly rely on a prior probability model of natural images. Here, we develop a robust and general methodology for making use of this implicit prior. We rely on a little-known statistical result due to Miyasawa (1961), who showed that the least-squares solution for removing additive Gaussian noise can be written directly in terms of the gradient of the log of the noisy signal density. We use this fact to develop a stochastic coarse-to-fine gradient ascent procedure for drawing high-probability samples from the implicit prior embedded within a CNN trained to perform blind (i.e., unknown noise level) least-squares denoising. A generalization of this algorithm to constrained sampling provides a method for using the implicit prior to solve any linear inverse problem, with no additional training. We demonstrate this general form of transfer learning in multiple applications, using the same algorithm to produce high-quality solutions for deblurring, super-resolution, inpainting, and compressive sensing.
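
The sampling procedure described in the abstract can be illustrated concretely. Below is a minimal NumPy sketch of a stochastic coarse-to-fine gradient ascent of the kind described, not the paper's exact algorithm or schedule: it assumes a hypothetical blind-denoiser callable `denoiser(y)`, and the step size `h`, noise-injection factor `beta`, and stopping threshold `sigma_min` are illustrative parameters. By Miyasawa's relation, the denoiser residual `denoiser(y) - y` is proportional to the gradient of the log of the noisy-signal density, so repeatedly taking a partial step along it while re-injecting a controlled amount of noise traces a coarse-to-fine ascent toward a high-probability image.

```python
import numpy as np

def sample_from_implicit_prior(denoiser, y0, h=0.01, beta=0.5,
                               sigma_min=0.01, max_iters=10000):
    """Stochastic coarse-to-fine ascent on log p(y) using a blind denoiser.

    `denoiser(y)` is assumed to be a least-squares Gaussian denoiser that
    handles unknown noise levels; its residual approximates
    sigma^2 * grad log p(y) (Miyasawa, 1961).
    """
    y = y0.copy()
    for _ in range(max_iters):
        f = denoiser(y) - y                    # residual ~ sigma^2 * grad log p(y)
        sigma = np.sqrt(np.mean(f ** 2))       # effective noise level of the current iterate
        if sigma <= sigma_min:                 # stop once the iterate is effectively noise-free
            break
        # inject noise so the effective noise level shrinks gradually (beta in [0, 1])
        gamma = np.sqrt((1 - beta * h) ** 2 - (1 - h) ** 2) * sigma
        z = np.random.randn(*y.shape)
        y = y + h * f + gamma * z              # partial denoising step plus fresh noise
    return y
```

The constrained generalization mentioned in the abstract restricts each step so the iterate stays consistent with the linear measurements; its exact form is derived in the paper and is not reproduced in this sketch.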
Conference Poster: pdf
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:2007.13640/code)
