Invertible generative models for inverse problems: mitigating representation error and dataset bias

25 Sept 2019 (modified: 05 May 2023) · ICLR 2020 Conference Blind Submission
Keywords: Invertible generative models, inverse problems, generative prior, Glow, compressed sensing, denoising, inpainting.
TL;DR: Invertible generative neural networks provide effective natural image priors for inverse problems, outperforming GAN and Lasso priors in compressive sensing while exhibiting strong out-of-distribution performance.
Abstract: Trained generative models have shown remarkable performance as priors for inverse problems in imaging. For example, Generative Adversarial Network priors permit the recovery of test images from 5-10x fewer measurements than sparsity priors. Unfortunately, these models may fail to represent a particular image because of architectural choices, mode collapse, or bias in the training dataset. In this paper, we demonstrate that invertible neural networks, which have zero representation error by design, can be effective natural signal priors for inverse problems such as denoising, compressive sensing, and inpainting. Our formulation is an empirical risk minimization that does not directly optimize the likelihood of images, as one might expect. Instead, we optimize the likelihood of the latent representation of images as a proxy, as this is empirically easier. For compressive sensing, our formulation can yield higher accuracy than sparsity priors across almost all undersampling ratios; at the same accuracy on test images, it can use 10-20x fewer measurements. We demonstrate that invertible priors can yield better reconstructions than sparsity priors for images that exhibit rare features of variation within the biased training set, including out-of-distribution natural images.
Code: https://drive.google.com/file/d/1oqm_fnh3l7NP0Dycxq744mbH_-SU-KIf/view
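The latent-space formulation described in the abstract can be sketched as follows: given measurements y = A x and a pretrained invertible generator G (e.g., Glow), recover the image by minimizing the measurement error over the latent code z, with a penalty on the latent norm standing in as a proxy for the latent likelihood under a Gaussian prior. Below is a minimal PyTorch sketch; the generator interface, the penalty weight gamma, and the optimizer settings are illustrative assumptions, not the paper's exact configuration.

    import torch

    def recover(y, A, G, latent_dim, gamma=0.01, steps=1000, lr=0.1):
        # Empirical risk minimization in latent space (sketch).
        # y: measurements, shape (m,); A: measurement matrix, shape (m, n)
        # G: differentiable invertible generator mapping z (n,) -> image x (n,)
        # gamma: weight on the latent-norm proxy for latent likelihood (assumed value)
        z = torch.zeros(latent_dim, requires_grad=True)  # start at the latent mean
        opt = torch.optim.Adam([z], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            x = G(z)  # invertibility: any image is reachable, so representation error is zero
            loss = ((A @ x - y) ** 2).sum() + gamma * z.norm()
            loss.backward()
            opt.step()
        return G(z).detach()

Because G is invertible, the measurement-error term alone is underdetermined whenever A undersamples; the latent-norm penalty is what selects a high-likelihood reconstruction among the images consistent with y.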
