Regularized Training of Intermediate Layers for Generative Models for Inverse Problems

Published: 21 Feb 2023, Last Modified: 28 Feb 2023. Accepted by TMLR.
Abstract: Generative Adversarial Networks (GANs) have been shown to be powerful and flexible priors when solving inverse problems. One challenge of using them is overcoming representation error, the fundamental limitation of the network in representing any particular signal. Recently, multiple proposed inversion algorithms have reduced representation error by optimizing over intermediate layer representations. These methods are typically applied to generative models that were trained agnostic of the downstream inversion algorithm. In our work, we introduce the principle that if a generative model is intended for inversion using an algorithm based on optimization of intermediate layers, it should be trained in a way that regularizes those intermediate layers. We instantiate this principle for two notable recent inversion algorithms: Intermediate Layer Optimization and the Multi-Code GAN prior. For both of these inversion algorithms, we introduce a new regularized GAN training algorithm and demonstrate that the learned generative model results in lower reconstruction errors across a wide range of undersampling ratios when solving compressed sensing, inpainting, and super-resolution problems.
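To make the training principle concrete, below is a minimal PyTorch-style sketch assuming a simple L2 penalty on the intermediate activations a downstream inversion algorithm would optimize over. The toy Generator, generator_loss, and reg_weight names, and the choice of regularizer, are illustrative assumptions, not the paper's exact formulation (see the linked RTIL repository for the authors' implementation).

import torch
import torch.nn as nn

class Generator(nn.Module):
    # Toy generator exposing the intermediate representation that an
    # inversion algorithm such as ILO would later optimize over.
    def __init__(self, z_dim=64, h_dim=128, x_dim=784):
        super().__init__()
        self.block1 = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU())
        self.block2 = nn.Linear(h_dim, x_dim)

    def forward(self, z):
        h = self.block1(z)  # intermediate layer representation
        x = self.block2(h)
        return x, h

def generator_loss(disc, gen, z, reg_weight=0.1):
    # Non-saturating GAN generator loss plus a hypothetical L2 penalty
    # that keeps intermediate activations well-scaled for inversion.
    x_fake, h = gen(z)
    adv = -torch.log(torch.sigmoid(disc(x_fake)) + 1e-8).mean()
    reg = h.pow(2).mean()
    return adv + reg_weight * reg

In such a sketch, this loss would replace the standard generator objective during GAN training, with reg_weight trading off sample quality against how well the regularized intermediate layers behave under optimization-based inversion.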
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: We would like to thank all reviewers and the action editor for their time and effort in providing constructive feedback on our work. Based on the discussion, we have incorporated the following changes: (1) increased the test set size for CELEBA-HQ from 60 to 100 (Figures 4 and 5); (2) added the additional metrics SSIM and LPIPS to the appendix (Figures 11, 12, 14, and 15).
Code: https://github.com/g33sean/RTIL
Assigned Action Editor: ~Vincent_Dumoulin1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 526