[Re] Spatial-Adaptive Network for Single Image Denoising

Jan 31, 2021 (edited Apr 08, 2021) · RC2020
  • Keywords: image denoising, image restoration, image processing
  • Abstract: Reproducibility Summary

In this study, we present our results and experience from replicating the paper "Spatial-Adaptive Network for Single Image Denoising". The paper proposes a novel spatial-adaptive denoising architecture for efficient noise removal that leverages deformable convolutions to adapt to spatial information (i.e. edges and textures). We implemented the model from scratch in the PyTorch framework and then conducted real and synthetic noise experiments on the corresponding datasets. We were able to reproduce the results both qualitatively and quantitatively.

Scope of Reproducibility: The original paper proposes an encoder-decoder structure that exploits a residual spatial-adaptive block and a context block to capture multi-scale information, achieving state-of-the-art performance on real and synthetic noise removal.

Methodology: We implemented the model, namely SADNet, from scratch in PyTorch as described in the paper, and also adopted the training loop and the proposed blocks from the authors' code. Since the weight initialization of the proposed blocks is not explicitly defined in the paper, we used PyTorch's default initialization for convolutional layers (i.e. Kaiming). Each experiment took about 3 days on a single RTX 2080 Ti, requiring ~3 GB of GPU memory for training and ~8 GB of CPU memory for loading the data, due to the file structure of the datasets.

Results: We reproduced the results qualitatively and quantitatively on both synthetic and real noise removal tasks. SADNet has the capacity to learn to remove synthetic and real noise in images, and it produces visually plausible outputs even after a few epochs. We employed the SSIM and PSNR metrics to measure quantitative performance for all settings; the quantitative results on both tasks are on par with those reported in the paper.
What was easy: The code was open-source and implemented in PyTorch, so adapting the training loop and the proposed blocks to our implementation facilitated our reproduction study. The loss function is straightforward and the architecture has a U-Net-like structure, so we were able to implement the architecture in a reasonable time.

What was difficult: Due to incompatibility with current versions of PyTorch and TorchVision, and the dependency on an external CUDA implementation of deformable convolutions, we encountered several issues during our implementation. We then considered re-implementing the residual spatial-adaptive block and the context block from scratch to remove these dependencies; however, we could not achieve this by referring to the paper alone within the limited time. We therefore decided to use the blocks as provided in the authors' code.

Communication with original authors: We did not contact the authors, since we were able to resolve the issues encountered during the implementation of SADNet by examining the authors' code.
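The default convolutional-layer initialization mentioned in the Methodology can be made concrete with a short sketch. This is not the SADNet authors' code; it only illustrates that PyTorch's `nn.Conv2d` already applies Kaiming (He) uniform initialization in its `reset_parameters()`, so relying on the default is equivalent to calling `kaiming_uniform_` explicitly:

```python
import math
import torch
import torch.nn as nn

# Default: nn.Conv2d initializes its weight with Kaiming uniform
# (kaiming_uniform_ with a=sqrt(5)) under the hood.
conv = nn.Conv2d(3, 64, kernel_size=3, padding=1)

# The same initialization written out explicitly (illustrative only):
explicit = nn.Conv2d(3, 64, kernel_size=3, padding=1)
nn.init.kaiming_uniform_(explicit.weight, a=math.sqrt(5))
```

Both layers end up with weights drawn from the same distribution, which is why the report's choice of "the default initialization (i.e. Kaiming)" requires no extra code.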
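For the quantitative evaluation, a minimal sketch of the PSNR computation used alongside SSIM is shown below; it assumes images scaled to [0, 1] and is a generic definition, not the exact evaluation script of the report:

```python
import torch

def psnr(clean: torch.Tensor, denoised: torch.Tensor, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB between a clean and a denoised image."""
    mse = torch.mean((clean - denoised) ** 2)
    return (10 * torch.log10(max_val ** 2 / mse)).item()

# Example: a uniform error of 0.1 gives MSE = 0.01, i.e. PSNR = 20 dB.
value = psnr(torch.zeros(1, 3, 8, 8), torch.full((1, 3, 8, 8), 0.1))
```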
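The dependency issue described under "What was difficult" can today be sidestepped with the deformable convolution shipped in TorchVision itself. The block below is a hedged sketch using `torchvision.ops.DeformConv2d` as an alternative to an external CUDA implementation; the offset-prediction layer and channel sizes are illustrative, not the SADNet authors' exact configuration:

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformBlock(nn.Module):
    """Illustrative deformable conv block: a plain conv predicts per-location
    sampling offsets, which DeformConv2d consumes alongside the input."""
    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        pad = kernel_size // 2
        # 2 offset values (x, y) per kernel sampling location
        self.offset = nn.Conv2d(channels, 2 * kernel_size * kernel_size,
                                kernel_size, padding=pad)
        self.deform = DeformConv2d(channels, channels, kernel_size, padding=pad)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.deform(x, self.offset(x))

x = torch.randn(1, 32, 16, 16)
y = DeformBlock(32)(x)  # spatial size is preserved by the padding
```

Using the built-in op removes both the external CUDA dependency and the version-compatibility problems the report describes.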
  • Paper Url: https://openreview.net/forum?id=8CJIjwUrbls&noteId=r7GUWB4G6oT
  • Supplementary Material: zip