Dic-UCSNet: A Novel Feature Dictionary-Based Underwater Image Compressive Sensing Framework

17 Sept 2025 (modified: 12 Nov 2025) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: underwater image (UWI), compressive sensing, reference-based image reconstruction, underwater detection
Abstract: The underwater image (UWI) is one of the main sources from which researchers obtain underwater information, so its quality directly determines the effectiveness and accuracy of subsequent high-level tasks. Most existing compressive sensing (CS) algorithms are designed for on-land images, which differ greatly from UWIs, so applying them to UWIs leaves much room for performance improvement. Compared with on-land images, UWIs share a large number of similar features with one another, because underwater scenes are simpler and contain fewer semantic elements. Different UWIs often contain semantically identical objects with structural and feature similarities. To further improve CS performance by exploiting this inter-UWI similarity, we propose a feature dictionary-based CS framework for UWIs, dubbed Dic-UCSNet. Specifically, we first construct a multi-scale discrete codebook as the underwater feature dictionary (UF-Dic), which provides an inter-image similarity prior for the underwater CS task. Subsequently, to better match the dictionary features with the input features and thereby improve their utilization, we propose an underwater dictionary feature fusion module (UDFF-Module), which uses an underwater physical prior to transfer the degradation style of the dictionary features to that of the input features, and then adaptively adjusts the dictionary features according to the difference map. Experimental results on three real-world UWI datasets show that, compared with other state-of-the-art CS methods, our Dic-UCSNet achieves an average improvement of 5\% to 15\% in objective metrics (PSNR/SSIM/LPIPS/NIQE) and attains the best visual quality at all tested sampling rates (0.01, 0.04, 0.1 and 0.3).
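The paper itself is not reproduced here, but the core dictionary-matching step the abstract describes (retrieving the nearest entries of a discrete feature codebook for each input feature) can be sketched as a vector-quantization-style nearest-neighbor lookup. This is a minimal illustrative sketch under assumed shapes; the function and variable names are not from the paper.

```python
import numpy as np

def dictionary_lookup(features, codebook):
    """Match each input feature vector to its nearest codebook entry
    (L2 distance), as in a discrete-codebook (VQ-style) feature dictionary.

    features: (N, d) input feature vectors
    codebook: (K, d) learned dictionary entries
    Returns the retrieved dictionary features (N, d) and their indices (N,).
    """
    # Pairwise squared distances via broadcasting -> shape (N, K)
    d2 = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    idx = d2.argmin(axis=1)      # index of the nearest entry per feature
    return codebook[idx], idx    # retrieved dictionary features

# Toy usage: 4 input features of dimension 3, a codebook of 5 entries
rng = np.random.default_rng(0)
feats = rng.standard_normal((4, 3))
book = rng.standard_normal((5, 3))
retrieved, idx = dictionary_lookup(feats, book)
```

In the full method this lookup would operate per scale of the multi-scale codebook, and the retrieved features would then be restyled and fused by the UDFF-Module rather than used directly.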
Supplementary Material: zip
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 9108