Abstract: In self-supervised image denoising, it is challenging to construct paired noisy samples from a single noisy observation, and the quality of these samples strongly influences the performance of the denoising model. Strategies for constructing training pairs, such as blind-spot convolution and sub-sampling, are widely adopted in existing self-supervised denoising methods. However, these strategies suffer from severe information underutilization and pixel misalignment, which seriously hinder further improvement of denoising performance. Furthermore, little attention has been paid to the sensitivity of denoising models to unknown noise, which is of great significance for enhancing their practicality. To overcome these challenges, we propose a simple yet effective method, called Cyclic Shift, to construct paired noisy images for self-supervised training. This new strategy solves the problems of information underutilization and pixel misalignment without additional computation, and it can be easily embedded into existing denoising methods to significantly boost their performance. In addition, we introduce an uncertainty-aware loss in training that enables the denoising network to perceive noise intensity and achieve robust denoising performance. We theoretically explain the effectiveness of Cyclic Shift and analyze the ability of the uncertainty loss to endow the network with noise-intensity perception. Extensive experimental results show that our approach achieves state-of-the-art self-supervised image denoising performance.
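The abstract does not give implementation details, but the two core ideas can be sketched. Below is a minimal, hypothetical illustration: a cyclically shifted copy of a noisy image retains every pixel (unlike sub-sampling) with an exactly known offset, and a heteroscedastic-style uncertainty loss down-weights residuals where predicted noise intensity is high. The function names, the shift amounts, and the specific loss form are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

def cyclic_shift_pair(noisy, shift=(1, 1)):
    # Cyclic (circular) shift: every pixel of the observation is retained,
    # merely re-indexed, so no information is discarded, and the offset
    # between the two views is known exactly (no irregular misalignment
    # as with sub-sampling). Shift amounts here are illustrative.
    shifted = np.roll(noisy, shift=shift, axis=(0, 1))
    return noisy, shifted

def uncertainty_aware_loss(pred, target, log_sigma):
    # A common heteroscedastic formulation (assumed, not the paper's exact
    # loss): residuals are scaled by predicted noise intensity sigma, with
    # a log-sigma penalty so sigma cannot grow without bound.
    sigma = np.exp(log_sigma)
    return np.mean(np.abs(pred - target) / sigma + log_sigma)

# Usage: a toy 4x4 single-channel "noisy" image
img = np.arange(16, dtype=np.float32).reshape(4, 4)
src, tgt = cyclic_shift_pair(img, shift=(0, 1))
assert src.shape == tgt.shape                # same size, pixel-for-pixel pairing
assert np.isclose(src.sum(), tgt.sum())      # no pixels lost by the shift

loss = uncertainty_aware_loss(tgt, src, log_sigma=np.zeros_like(src))
```

With `log_sigma = 0` the loss reduces to the plain L1 distance between the two shifted views, which is the degenerate case of the uncertainty weighting.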
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Unsupervised and Self-supervised learning