A Scalable Training Strategy for Blind Multi-Distribution Noise Removal

22 Sept 2022 (modified: 13 Feb 2023) · ICLR 2023 Conference Withdrawn Submission · Readers: Everyone
Keywords: denoising, image restoration, curriculum learning
TL;DR: A Scalable Training Strategy for Blind Multi-Distribution Noise Removal
Abstract: Despite recent advances, developing general-purpose universal denoising and artifact-removal networks remains largely an open problem: Given fixed network weights, one inherently trades off specialization at one task (e.g., removing Poisson noise) for performance at another (e.g., removing speckle noise). In addition, training such a network is challenging due to the curse of dimensionality: As the dimension of the specification space (i.e., the number of parameters needed to describe the noise distribution) increases, the number of unique specifications one needs to train for grows exponentially. Uniformly sampling this space yields a network that performs well on very challenging problem specifications but poorly on easy ones, where even large errors have only a small effect on the overall mean-squared error. In this work we propose training denoising networks using an adaptive sampling strategy. Our work improves upon a recent universal-denoiser training strategy by extending its results to higher dimensions and by incorporating a polynomial approximation of the true specification-loss landscape. We test our method on joint Poisson-Gaussian-speckle noise and demonstrate that, with our training strategy, a single trained generalist denoiser network achieves mean-squared errors within a relatively uniform bound of those of specialized denoiser networks across a large range of operating conditions.
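To make the two ingredients the abstract describes more concrete, here is a minimal, hypothetical Python sketch: a joint Poisson-Gaussian-speckle forward model and an adaptive sampler that fits a polynomial surrogate to the specification-loss landscape and draws training specifications in proportion to the predicted loss gap. The noise-composition order, the parameterization (Poisson peak, Gaussian sigma, speckle looks), the loss-gap definition, and the polynomial degree are all illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def apply_joint_noise(img, peak, sigma, looks):
    """Apply speckle, then Poisson, then Gaussian noise to an image in [0, 1].

    The composition order and parameterization are assumptions for
    illustration; the paper treats joint Poisson-Gaussian-speckle noise,
    but this sketch fixes one plausible forward model.
    """
    # Speckle: multiplicative gamma noise; more "looks" means milder speckle.
    speckled = img * rng.gamma(shape=looks, scale=1.0 / looks, size=img.shape)
    # Poisson: photon/shot noise at a given peak photon count.
    shot = rng.poisson(np.clip(speckled, 0, 1) * peak) / peak
    # Gaussian: additive read noise.
    return shot + rng.normal(0.0, sigma, size=img.shape)

def adaptive_spec_weights(specs, loss_gaps, degree=2):
    """Fit a polynomial surrogate to the specification-loss landscape and
    return sampling weights that emphasize under-performing specifications.

    `specs` is (N, d): d noise parameters per specification. `loss_gaps`
    is a per-specification measure of how far the generalist lags its
    target (the exact definition is an assumption here).
    """
    # Per-dimension polynomial feature expansion of the d-dim spec space.
    feats = np.hstack([specs**k for k in range(degree + 1)])
    coef, *_ = np.linalg.lstsq(feats, loss_gaps, rcond=None)
    predicted_gap = feats @ coef
    # Sample specifications in proportion to their predicted loss gap.
    w = np.maximum(predicted_gap, 1e-8)
    return w / w.sum()

# Toy usage: 3-D spec space (Poisson peak, Gaussian sigma, speckle looks).
specs = rng.uniform([10, 0.01, 1], [255, 0.2, 16], size=(128, 3))
loss_gaps = rng.uniform(0, 1, size=128)          # placeholder loss gaps
weights = adaptive_spec_weights(specs, loss_gaps)
batch_ids = rng.choice(len(specs), size=16, p=weights)
img = rng.uniform(0, 1, size=(32, 32))
noisy = apply_joint_noise(img, *specs[batch_ids[0]])
```

In practice the loss gaps would come from evaluating the generalist against per-specification baselines (e.g., specialized denoisers) at sampled points of the specification space; the uniform placeholder above only demonstrates the sampling mechanics.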
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Applications (e.g., speech processing, computer vision, NLP)