Redundant Feature-Processing Module Based on Dual-Branch for Underwater Image Enhancement

Published: 01 Jan 2024, Last Modified: 06 Nov 2025. IEEE Trans. Instrum. Meas. 2024. License: CC BY-SA 4.0
Abstract: Due to the scattering and absorption of light, underwater images suffer from color cast, low contrast, and blur. Existing deep learning-based underwater image enhancement (UIE) algorithms often struggle to balance task requirements, enhanced image quality, and computing speed. To address these issues, this article proposes a task-friendly framework for simultaneous enhancement and super-resolution (SESR) of underwater scenes. First, it classifies underwater images by illumination, which reduces the network burden and improves performance through functional decomposition. Second, it introduces a redundant feature-processing module based on dual-branch (RFMD), designed according to underwater imaging characteristics, and employs an external network to generate latent codes for reprocessing redundant features. RFMD effectively balances texture, color, contrast, and saturation across different tasks. Third, a sparse redundant feature-processing module based on dual-branch (SRFMD) is proposed, which significantly improves the network's generalization ability and computing speed. Qualitative and quantitative experiments demonstrate that RFMD and SRFMD outperform traditional methods [IBLA and relative global histogram stretching (RGHS)] and learning-based methods (Ucolor, FUnIE-GAN, FSRCNN, SRResNet, WDSR, ESRGAN, and URSCT-SESR) in balancing color, saturation, contrast, and texture information. The code is publicly available at: https://github.com/ZHANGYW1/Class_RFMD.
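To make the dual-branch idea in the abstract more concrete, the sketch below shows one plausible way a module could route features through a primary refinement branch and a "redundant" branch whose output is modulated by a latent code produced by an external encoder. This is a minimal, assumption-based PyTorch sketch, not the authors' actual RFMD or SRFMD; the class name DualBranchRedundantModule and the parameters channels and latent_dim are hypothetical, and the real design is documented in the linked repository.

    # Minimal sketch (assumption-based, NOT the authors' exact RFMD):
    # a dual-branch module where one branch refines primary features and
    # a second branch reprocesses "redundant" features, modulated by a
    # latent code from an external encoder.
    import torch
    import torch.nn as nn

    class DualBranchRedundantModule(nn.Module):
        def __init__(self, channels=64, latent_dim=128):
            super().__init__()
            # Primary branch: standard convolutional refinement.
            self.primary = nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1),
            )
            # Redundant branch: lightweight 1x1 reprocessing of the same features.
            self.redundant = nn.Conv2d(channels, channels, 1)
            # Map the external latent code to per-channel scale and shift.
            self.modulate = nn.Linear(latent_dim, 2 * channels)

        def forward(self, feats, latent_code):
            primary_out = self.primary(feats)
            # Split the projected latent code into scale/shift and broadcast spatially.
            scale, shift = self.modulate(latent_code).chunk(2, dim=1)
            scale = scale.unsqueeze(-1).unsqueeze(-1)
            shift = shift.unsqueeze(-1).unsqueeze(-1)
            redundant_out = self.redundant(feats) * scale + shift
            # Fuse the two branches by summation.
            return primary_out + redundant_out

    # Usage sketch: feats from a backbone, latent_code from an external encoder.
    feats = torch.randn(1, 64, 32, 32)
    latent_code = torch.randn(1, 128)
    out = DualBranchRedundantModule()(feats, latent_code)

In this sketch, keeping the redundant branch lightweight (a 1x1 convolution plus latent-code modulation) is one way such a design could trade a small amount of capacity for speed, which is in the spirit of the sparse SRFMD variant the abstract describes.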