Abstract: Due to the complexity of underwater imaging environments, underwater images often suffer from blurriness, low contrast, and color distortion, posing a significant challenge for underwater vision tasks. In this article, we propose a vector quantized underwater image enhancement network that combines the strengths of generative adversarial networks and transformers through quantization. The proposed method consists of two parts: a vector quantized generative network and an axial flow-guided latent transformer. The vector quantized generative network first learns discrete content representations of underwater images through a vector quantized codebook. To facilitate deep feature extraction, we introduce an enhanced residual attention module that exploits the strengths of residual connections and channel-wise attention. After encoding the content as codebook indices, we use the axial flow-guided latent transformer to learn the content distribution in an autoregressive manner. The collaboration of generative adversarial networks and transformers helps capture both local and global dependencies in underwater images. Experimental results on publicly available datasets comprehensively validate the strong performance of the proposed method on underwater image enhancement tasks.
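The codebook quantization step described above can be sketched as follows. This is a minimal illustration of the general vector quantization idea (nearest-codebook lookup producing discrete indices); the codebook size, feature dimension, and the `quantize` helper are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

# Sketch of vector quantization as used in VQ-style generative networks:
# each continuous feature vector is replaced by its nearest codebook entry,
# and the entry's index serves as a discrete content token that a latent
# transformer can model autoregressively.
rng = np.random.default_rng(0)
codebook = rng.standard_normal((512, 64))   # 512 entries, 64-dim features (assumed sizes)

def quantize(features):
    """Map each feature vector to (codebook index, quantized vector)."""
    # Squared Euclidean distance from every feature to every codebook entry
    dists = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    indices = dists.argmin(axis=1)          # discrete content tokens
    return indices, codebook[indices]       # quantized features snap to codebook

features = rng.standard_normal((10, 64))    # e.g. flattened encoder output
indices, quantized = quantize(features)
print(indices.shape, quantized.shape)       # (10,) (10, 64)
```

In VQ-based pipelines of this kind, the decoder reconstructs the image from the quantized vectors, while the sequence of indices is what the autoregressive transformer learns to predict.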