Chasing Better Deep Image Priors between Over- and Under-parameterization

Published: 16 Aug 2023, Last Modified: 16 Aug 2023 · Accepted by TMLR
Abstract: Deep Neural Networks (DNNs) are well known to act as \textbf{over-parameterized} deep image priors (DIP) that regularize various image inverse problems. Meanwhile, researchers have also proposed extremely compact, \textbf{under-parameterized} image priors (e.g., the deep decoder) that are strikingly competent for image restoration as well, despite some loss of accuracy. These two extremes prompt us to ask whether a better solution exists in the middle: \textit{between over- and under-parameterized image priors, can one identify ``intermediate'' parameterized image priors that achieve better trade-offs among performance, efficiency, and even strong transferability?} Drawing inspiration from the lottery ticket hypothesis (LTH), we conjecture and study a novel ``lottery image prior'' (\textbf{LIP}) that exploits the inherent sparsity of DNNs, stated as follows: \textit{given an over-parameterized DNN-based image prior, it contains a sparse subnetwork that can be trained in isolation to match the original DNN's performance when applied as a prior to various image inverse problems}. Our results validate the superiority of LIPs: we can successfully locate LIP subnetworks from over-parameterized DIPs across substantial sparsity ranges. These LIP subnetworks significantly outperform deep decoders at comparably compact model sizes (often fully preserving the effectiveness of their over-parameterized counterparts), and they also transfer well across different images and restoration task types. We further extend LIP to compressive sensing image reconstruction, where a \textit{pre-trained} GAN generator is used as the prior (in contrast to the \textit{untrained} DIP or deep decoder), and confirm its validity in this setting as well. To the best of our knowledge, this is the first time LTH has been shown to be relevant in the context of inverse problems or image priors. Code is available at https://github.com/VITA-Group/Chasing-Better-DIPs.
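For intuition, the stated conjecture amounts to applying lottery-ticket-style magnitude pruning with rewinding to a DIP network fitted on a single corrupted image. The sketch below is illustrative only and is not the authors' released implementation: the helper names (`fit_dip`, `magnitude_mask`, `build_dip_unet`), step counts, and sparsity level are assumptions; for the actual LIP procedure, see the code linked above.

```python
# Minimal sketch (assumed PyTorch workflow, not the released LIP code):
# one round of lottery-ticket pruning on an untrained image prior.
import copy
import torch
import torch.nn.functional as F

def fit_dip(net, z, y_noisy, mask=None, steps=2000, lr=1e-2):
    """Fit the prior network to a corrupted image; `mask` keeps pruned weights at zero."""
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.mse_loss(net(z), y_noisy)  # reconstruction loss against the observation
        loss.backward()
        opt.step()
        if mask is not None:  # re-apply the sparsity mask after each update
            with torch.no_grad():
                for p, m in zip(net.parameters(), mask):
                    p.mul_(m)
    return net

def magnitude_mask(net, sparsity=0.8):
    """Globally prune the smallest-magnitude weights, returning one binary mask per tensor."""
    flat = torch.cat([p.detach().abs().flatten() for p in net.parameters()])
    threshold = torch.quantile(flat, sparsity)
    return [(p.detach().abs() > threshold).float() for p in net.parameters()]

# Usage sketch (`build_dip_unet` is a hypothetical builder for the hourglass DIP network):
# net = build_dip_unet()
# init_state = copy.deepcopy(net.state_dict())   # record the random initialization
# fit_dip(net, z, y_noisy)                       # 1) train the dense prior on one image
# mask = magnitude_mask(net, sparsity=0.9)       # 2) prune the smallest weights globally
# net.load_state_dict(init_state)                # 3) rewind to the original initialization
# fit_dip(net, z, y_noisy, mask=mask)            # 4) retrain the sparse LIP subnetwork in isolation
```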
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission:
1. We have added connections to relevant literature and discussions in Sections 2.2 and 2.3.
2. To discuss the current limitations of LIP subnetworks and some failure cases, we have added a new section titled "Limitations and Future Work" to the revised paper.
3. We have added layer-wise sparsity ratios of the classification tickets in Figure 7 (in particular, Figure 7 (c) and (f)).
4. We have added a discussion in Section 4 answering the question "Why do the LIP subnetworks outperform the dense model with fewer parameters at the beginning?".
5. We have clarified the significance of the GAN compressive sensing portion and added explanations at the beginning of Section 5.
6. We have added experiments on the SGLD DIP model, one of the state-of-the-art DIP-based frameworks; the results are summarized in Figure 16 and discussed in the paragraph "How is the Performance of LIP Subnetworks on Other DIP-based Architectures?" in Appendix B.
7. We have added a discussion of LIP, NAS-DIP, and ISNAS-DIP in the paragraph "More discussions about LIP, NAS-DIP, and ISNAS-DIP." in Appendix B.
Code: https://github.com/VITA-Group/Chasing-Better-DIPs
Assigned Action Editor: ~Yanwei_Fu2
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 959