Single image deraining with dual U-Net generative adversarial network

Published: 01 Jan 2022 · Last Modified: 06 Mar 2025 · Multidimensional Systems and Signal Processing, 2022 · License: CC BY-SA 4.0
Abstract: Most existing deraining methods cannot preserve image details while removing rain streaks. To solve this problem, we propose a single image deraining method based on a dual U-Net generative adversarial network (DU-GAN). By using two U-Nets with stronger learning ability as its generator, DU-GAN not only removes rain streaks more accurately but also preserves image details. The network makes full use of image information and extracts complete image features. An adversarial loss over the proposed dual U-Net generator is used to produce derained images that are close to the ground truth. Furthermore, to improve the visual quality of the generated images, L1 and structural similarity (SSIM) loss functions, which are consistent with human visual perception, are applied to produce the final output. Both synthetic and real rainy image datasets are used to evaluate the effectiveness of the proposed network. The quantitative and visual experimental results show that the proposed method achieves state-of-the-art performance compared with other single image deraining methods. The source code can be found at https://github.com/LuBei-design/DU-GAN.
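The abstract combines an adversarial loss with L1 and SSIM terms for the generator. As a rough illustration of such a composite objective, the sketch below computes it in NumPy with a simplified single-window SSIM; the weights `w_adv`, `w_l1`, `w_ssim` and the non-saturating form of the adversarial term are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def ssim_global(x, y, c1=0.01**2, c2=0.03**2):
    """Simplified SSIM computed over the whole image (pixel values in [0, 1]).
    The paper's SSIM loss is typically windowed; this global variant is a sketch."""
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x**2 + mu_y**2 + c1) * (var_x + var_y + c2)
    )

def generator_loss(derained, ground_truth, disc_fake,
                   w_adv=1e-3, w_l1=1.0, w_ssim=1.0):
    """Composite generator objective: adversarial + L1 + (1 - SSIM).
    Weights are placeholder values, not taken from the paper."""
    # Non-saturating adversarial term: push D(generated) toward 1.
    l_adv = -np.log(np.clip(disc_fake, 1e-8, 1.0)).mean()
    l_l1 = np.abs(derained - ground_truth).mean()      # pixel-wise fidelity
    l_ssim = 1.0 - ssim_global(derained, ground_truth)  # structural fidelity
    return w_adv * l_adv + w_l1 * l_l1 + w_ssim * l_ssim
```

When the derained output equals the ground truth and the discriminator scores it 1.0, all three terms vanish and the loss is zero; any residual rain streak increases both the L1 and the (1 - SSIM) terms.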
