Measuring Underwater Image Quality with Depthwise-Separable Convolutions and Global-Sparse Attention

Published: 01 Jan 2023 · Last Modified: 13 May 2025 · MMSP 2023 · CC BY-SA 4.0
Abstract: Underwater Image Quality Assessment (UIQA) poses specific challenges due to blur, dispersion of light, and color distortion caused by water turbulence. Images captured by Autonomous Underwater Vehicles (AUVs) correlate poorly across their RGB channels, as the blue and green channel components dominate. We propose a novel lightweight no-reference (NR) UIQA architecture based on depthwise-separable convolutions and global sparse attention that surpasses existing deep learning architectures in performance as well as in parameter count and Multiply-Accumulate operations (MACs). The proposed model yields a 3.64% increase in SRCC and a 6.11% increase in PLCC over the state of the art on the UEQAB dataset. Our work also illustrates the effectiveness of depthwise-separable convolutions for underwater image quality assessment: our empirical analysis confirms that, compared to standard convolutions, depthwise-separable convolutions leave less redundancy among the channels of a feature map, thereby increasing representational efficiency.
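
The sketch below illustrates the general idea of a depthwise-separable convolution block and its parameter savings over a standard convolution; it is a minimal PyTorch example for illustration only, and the module name, channel sizes, and normalization/activation choices are assumptions, not the exact blocks used in the proposed architecture.

```python
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise convolution followed by a 1x1 pointwise convolution."""
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1):
        super().__init__()
        # Depthwise step: one filter per input channel (groups=in_ch),
        # so spatial filtering is done independently per channel.
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size, stride,
                                   padding=kernel_size // 2,
                                   groups=in_ch, bias=False)
        # Pointwise step: 1x1 convolution mixes information across channels.
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)   # assumed normalization choice
        self.act = nn.ReLU(inplace=True)   # assumed activation choice

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# Rough parameter comparison against a standard convolution of the same shape.
std = nn.Conv2d(64, 128, kernel_size=3, padding=1, bias=False)
dsc = DepthwiseSeparableConv(64, 128)
count = lambda m: sum(p.numel() for p in m.parameters())
print(count(std), count(dsc))  # the separable block uses far fewer parameters
```

Factorizing the convolution this way reduces both parameters and MACs, which is what makes the depthwise-separable design attractive for a lightweight NR-UIQA model.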