Abstract: Deep learning networks have recently made significant progress in underwater image enhancement, but the shortage of paired labeled datasets has become a major obstacle to further development. Some methods train on synthetic datasets but overlook the distribution gap between synthetic and real data; as a result, models trained on synthetic datasets usually fail to generalize to real underwater scenes. To tackle these problems, we present a Feature Distillation and Guide Network (FDGN) for unsupervised underwater image enhancement, which uses clear underwater images to assist in enhancing distorted ones. Specifically, a Global–Local Feature Distillation module combines transformer and convolution layers to distill features from both global and local perspectives, and an adaptive fusion module dynamically fuses the distilled features. We then introduce a Domain Category Classifier that encourages the network to learn domain-invariant features. Finally, to reduce the erroneous features that may arise during domain adaptation, we present a Feature-Guided Network that reconstructs the distorted underwater images and transfers the distorted-domain features to the enhancement network. The reconstruction loss also pushes the feature extractor to suppress false features, yielding better enhancement results. Extensive experiments on multiple underwater datasets show that our method outperforms state-of-the-art methods both qualitatively and quantitatively.
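To make the global–local distillation and adaptive fusion ideas concrete, the following is a minimal PyTorch sketch of how such a module could be wired up. It is an illustrative assumption, not the authors' implementation: the class name `GlobalLocalDistillation`, the choice of a single transformer encoder layer as the global branch, the two-conv local branch, and the pooled-statistics gating in `fusion_gate` are all hypothetical design choices made only to show the pattern of fusing global and local feature views with learned, input-dependent weights.

```python
import torch
import torch.nn as nn

# Hypothetical sketch (not the paper's code): a transformer branch captures
# global context, a convolutional branch captures local detail, and an
# adaptive fusion gate blends the two with input-dependent channel weights.
class GlobalLocalDistillation(nn.Module):
    def __init__(self, channels=64, num_heads=4):
        super().__init__()
        # Global branch: one transformer encoder layer over flattened tokens.
        self.global_branch = nn.TransformerEncoderLayer(
            d_model=channels, nhead=num_heads,
            dim_feedforward=channels * 2, batch_first=True)
        # Local branch: stacked 3x3 convolutions preserve spatial detail.
        self.local_branch = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1))
        # Adaptive fusion: predict per-channel weights from pooled statistics
        # of both branches, then take a convex combination of the branches.
        self.fusion_gate = nn.Sequential(
            nn.Linear(channels * 2, channels), nn.Sigmoid())

    def forward(self, x):                      # x: (B, C, H, W)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)  # (B, H*W, C) for the transformer
        g = self.global_branch(tokens).transpose(1, 2).reshape(b, c, h, w)
        l = self.local_branch(x)
        stats = torch.cat([g.mean(dim=(2, 3)), l.mean(dim=(2, 3))], dim=1)
        gate = self.fusion_gate(stats).view(b, c, 1, 1)  # per-channel weights
        return gate * g + (1 - gate) * l       # dynamic fusion of the two views

if __name__ == "__main__":
    feats = torch.randn(2, 64, 32, 32)
    fused = GlobalLocalDistillation()(feats)
    print(fused.shape)  # torch.Size([2, 64, 32, 32])
```

The sigmoid gate keeps the fusion a convex combination, so neither branch can be silently discarded; the actual FDGN fusion rule may differ.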