Abstract: Recent image quality assessment (IQA) methods typically focus on predicting the mean opinion score (MOS) of image quality, ignoring the image quality score distribution. This distribution provides valuable information beyond the MOS, including the standard deviation of opinion scores (SOS) and the proportions of opinion scores at different quality levels. This paper introduces a novel no-reference IQA method that predicts the image quality score distribution in order to estimate the MOS. The proposed method consists of three modules: a visual feature extraction module, a graph convolutional module, and an MOS prediction module. In the visual feature extraction module, a convolutional neural network is designed to extract both first- and second-order visual features of images. The graph convolutional module employs a graph convolutional network (GCN)-based mapper that maps these visual features to the image quality score distribution by exploiting correlations between quality labels. The MOS is then derived from the predicted image quality score distribution in the MOS prediction module. We are the first to jointly train the method on both the MOS and the image quality score distribution, enabling it to learn richer subjective information and improve prediction performance. To address the lack of ground-truth image quality score distributions in some IQA databases, we propose using an SOS assumption to generate a Gaussian-based image quality score distribution that better reflects subjective perception. Additionally, we design appropriate loss functions for training. Experimental results demonstrate that our method effectively predicts both the image quality score distribution and the MOS, outperforming most state-of-the-art IQA methods.
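To make the label-generation and MOS-recovery steps concrete, the following is a minimal sketch (not the paper's implementation; helper names, the five-point quality scale, and NumPy usage are assumptions): it discretizes a Gaussian centered at the MOS with the SOS as its standard deviation to obtain a quality score distribution, and recovers the MOS as the expectation of that distribution.

```python
import numpy as np

QUALITY_LEVELS = np.arange(1, 6)  # assumed discrete quality levels 1..5

def gaussian_score_distribution(mos, sos, levels=QUALITY_LEVELS):
    """Discretize a Gaussian N(mos, sos^2) over the quality levels and normalize.

    This follows the SOS-based Gaussian assumption described in the abstract;
    the exact parameterization used by the authors may differ.
    """
    p = np.exp(-0.5 * ((levels - mos) / max(sos, 1e-6)) ** 2)
    return p / p.sum()

def mos_from_distribution(p, levels=QUALITY_LEVELS):
    """Recover the MOS as the expectation of the quality score distribution."""
    return float(np.sum(levels * p))

# Example: an image with MOS 3.4 and SOS 0.8
p = gaussian_score_distribution(mos=3.4, sos=0.8)
print(p, mos_from_distribution(p))
```

In this reading, the generated distribution serves as a proxy ground truth for databases that only report MOS and SOS, while the expectation operator ties the predicted distribution back to the MOS used in joint training.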