A GAN-based input-size flexibility model for single image dehazing

2022 (modified: 07 Apr 2022) · Signal Process. Image Commun. 2022
Highlights
• First, an end-to-end, input-size-flexible cGAN model is proposed for single image dehazing.
• Second, a simple and effective UR-Net structure is designed based on the popular U-Net structure and residual learning.
• Third, a consistency loss is proposed to keep the transformation consistent between the dehazed image and the real image.

Abstract
Image-to-image translation based on generative adversarial networks (GANs) has achieved state-of-the-art performance in various image restoration applications. Single image dehazing is a typical example, which aims to recover the haze-free image from a hazy one. This paper concentrates on the challenging task of single image dehazing. Based on the atmospheric scattering model, a novel model is designed to directly generate the haze-free image. The main challenge of image dehazing is that the atmospheric scattering model has two parameters, i.e., the transmission map and the atmospheric light; when they are estimated separately, their errors accumulate and compromise the dehazing quality. Considering this, and the variety of image sizes, a novel input-size-flexible conditional generative adversarial network (cGAN) is proposed for single image dehazing, which is input-size flexible at both the training and test stages of image-to-image translation within the cGAN framework. A simple and effective U-connection residual network (UR-Net) is proposed as the generator, and spatial pyramid pooling (SPP) is adopted to design the discriminator. Moreover, the model is trained with a multi-loss function, in which the consistency loss is newly designed in this paper. Finally, a multi-scale cGAN fusion model is built to achieve state-of-the-art single image dehazing performance. The proposed models receive a hazy image as input and directly output a haze-free one. Experimental results demonstrate the effectiveness and efficiency of the proposed models.
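The atmospheric scattering model the abstract refers to is commonly written as I(x) = J(x)·t(x) + A·(1 − t(x)), where I is the observed hazy image, J the scene radiance (haze-free image), t the transmission map, and A the global atmospheric light. The sketch below, a minimal NumPy illustration (function names and the t_min clamp are illustrative assumptions, not the paper's code), shows why estimating t and A separately lets errors accumulate: both appear in the inversion that recovers J.

```python
import numpy as np

def synthesize_haze(J, t, A):
    """Forward atmospheric scattering model: I = J * t + A * (1 - t).

    J : haze-free image, float array in [0, 1], shape (H, W, 3)
    t : transmission map in (0, 1], shape (H, W)
    A : global atmospheric light, scalar or length-3 array
    """
    t = t[..., np.newaxis]            # broadcast over color channels
    return J * t + A * (1.0 - t)

def recover_scene(I, t_hat, A_hat, t_min=0.1):
    """Invert the model with *estimated* t and A.

    Any error in t_hat or A_hat propagates directly into the recovered
    image, which motivates an end-to-end generator that outputs the
    haze-free image without estimating the two parameters separately.
    """
    t_hat = np.maximum(t_hat[..., np.newaxis], t_min)  # avoid division blow-up
    return (I - A_hat) / t_hat + A_hat
```

With exact t and A the inversion is lossless; perturbing either estimate degrades the recovered image, which is the error-accumulation issue the proposed direct-generation cGAN sidesteps.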