Abstract: Underwater images frequently experience quality degradation due to refraction, back-scattering, and absorption, leading to color distortion, blurriness, and reduced visibility. Such degradation can cause inaccuracies in the high-level computer vision applications deployed on autonomous underwater vehicles. Although existing approaches can enhance degraded images, they fail to preserve localized fine edges and to reproduce true colors. Therefore, an effective pre-processing network is necessary for underwater image enhancement. With this motivation, we propose a frequency modulated deformable transformer network for underwater image enhancement. First, features are extracted with the proposed multi-scale feature fusion feed-forward module. A frequency modulated deformable attention module is then proposed to reconstruct fine-level texture in the restored image; within it, we propose a spatio-channel attentive offset extractor in the modulated deformable convolution to focus on relevant contextual information. In addition, adaptive edge-preserving skip connections are proposed to propagate prominent edge features from the network's shallow layers to its deeper layers. A comprehensive evaluation of the proposed method on synthetic and real-world datasets, together with extensive ablation analysis, demonstrates that the proposed approach outperforms existing state-of-the-art methods. The testing code is provided at https://github.com/adinathdukre/FMDTUIE.
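To make the idea of a spatio-channel attentive offset extractor inside a modulated deformable convolution concrete, the following is a minimal, hedged PyTorch sketch. It is not the authors' implementation (see the linked repository for that); the module names, the squeeze-and-excitation-style channel gate, the 7×7 spatial gate, and all hyperparameters are illustrative assumptions. It only shows one plausible way attention-gated features can drive the offset and modulation-mask prediction fed to `torchvision.ops.deform_conv2d`.

```python
# Illustrative sketch only, not the paper's code. Names and design choices
# (SpatioChannelOffsetExtractor, reduction=4, 7x7 spatial gate) are assumptions.
import torch
import torch.nn as nn
from torchvision.ops import deform_conv2d


class SpatioChannelOffsetExtractor(nn.Module):
    """Hypothetical offset/mask predictor gated by channel and spatial attention."""

    def __init__(self, channels, kernel_size=3, reduction=4):
        super().__init__()
        k = kernel_size
        # Channel attention: squeeze-and-excitation style gating.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        # Spatial attention: single-channel gating map.
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )
        # Predict 2*k*k sampling offsets and k*k modulation scalars per location.
        self.offset_head = nn.Conv2d(channels, 2 * k * k, kernel_size=3, padding=1)
        self.mask_head = nn.Conv2d(channels, k * k, kernel_size=3, padding=1)

    def forward(self, x):
        # Emphasize contextually relevant channels and spatial positions
        # before predicting where the deformable kernel should sample.
        attended = x * self.channel_gate(x) * self.spatial_gate(x)
        offset = self.offset_head(attended)
        mask = torch.sigmoid(self.mask_head(attended))
        return offset, mask


class ModulatedDeformableBlock(nn.Module):
    """Modulated deformable convolution driven by the attentive offset extractor."""

    def __init__(self, channels, kernel_size=3):
        super().__init__()
        self.extractor = SpatioChannelOffsetExtractor(channels, kernel_size)
        self.weight = nn.Parameter(
            torch.randn(channels, channels, kernel_size, kernel_size) * 0.01
        )
        self.bias = nn.Parameter(torch.zeros(channels))
        self.padding = kernel_size // 2

    def forward(self, x):
        offset, mask = self.extractor(x)
        return deform_conv2d(
            x, offset, self.weight, self.bias, padding=self.padding, mask=mask
        )


if __name__ == "__main__":
    feats = torch.randn(1, 32, 64, 64)          # toy feature map
    block = ModulatedDeformableBlock(32)
    print(block(feats).shape)                   # torch.Size([1, 32, 64, 64])
```

In this sketch the attention product simply re-weights the features before offset prediction, so sampling locations are biased toward regions the gates deem informative; the actual module in the paper may combine frequency modulation and attention differently.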