LCIQA: A Lightweight Contrastive-Learning Framework for Image Quality Assessment via Cross-Scale Consistency Minimization

Published: 01 Jan 2025, Last Modified: 06 Nov 2025 · IEEE Trans. Circuits Syst. Video Technol. 2025 · CC BY-SA 4.0
Abstract: No-reference image quality assessment (NR-IQA), which operates without a reference image, is a challenging yet essential task in many image processing systems and downstream vision applications, ranging from semantic recognition to image enhancement. Most NR-IQA models have traditionally been trained with supervised learning and therefore depend heavily on the availability and quality of ground-truth labels. To improve the generalization capability and robustness of these models, recent studies have applied contrastive learning, strengthening the quality-representation capacity of model backbones in a self-supervised manner. However, contrastive training is computationally intensive, which poses significant challenges in resource-constrained environments. To mitigate this issue, we propose a Lightweight Contrastive-learning-based IQA (LCIQA) framework that can be trained efficiently on a single GPU without ground-truth labels. The framework keeps the vision backbone fixed and optimizes only the parameters of the subsequent IQA heads through contrastive learning. To keep the framework lightweight, we introduce a quality task adapter that removes the semantic biases carried by the features extracted from the fixed-parameter backbone. A coarse-to-fine contrastive learning strategy is then employed to train the quality regression module. Extensive experiments demonstrate the superior performance of our model in terms of both accuracy and complexity, and ablation studies validate the effectiveness of each component of the proposed framework.
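The abstract's central idea, a frozen feature extractor whose outputs feed a small trainable adapter optimized with a contrastive objective, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the backbone, adapter, augmentation, and the InfoNCE-style loss below are all stand-in assumptions chosen only to show where the trainable parameters sit relative to the fixed backbone.

```python
import numpy as np

rng = np.random.default_rng(0)

def frozen_backbone(images):
    # Stand-in for a fixed, pretrained vision backbone: a random but
    # *fixed* linear projection (seed 42), so its "parameters" never change.
    W = np.random.default_rng(42).standard_normal((images.shape[1], 128))
    return images @ W

def adapter(feats, A):
    # Hypothetical lightweight quality task adapter: the only trainable
    # weights in this sketch, applied on top of the frozen features.
    return np.maximum(feats @ A, 0.0)  # linear + ReLU

def info_nce_loss(z1, z2, tau=0.1):
    # InfoNCE-style contrastive loss: two views of the same image are a
    # positive pair; all other pairs in the batch act as negatives.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau                        # (N, N) similarities
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))             # positives on diagonal

# Toy batch: 8 "images" of 64 pixels, plus a lightly perturbed second view
# standing in for a quality-preserving augmentation.
x = rng.standard_normal((8, 64))
x_aug = x + 0.05 * rng.standard_normal((8, 64))

A = rng.standard_normal((128, 32)) * 0.1            # trainable adapter weights
loss = info_nce_loss(adapter(frozen_backbone(x), A),
                     adapter(frozen_backbone(x_aug), A))
print(float(loss))
```

In training, only `A` (and any regression-head weights) would receive gradients, which is what keeps the memory and compute footprint small enough for a single GPU; the coarse-to-fine strategy and the actual adapter architecture described in the paper are not reproduced here.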