Abstract: Learning-based image quality assessment (IQA) models have achieved impressive performance with the help of reliable subjective quality labels, among which the mean opinion score (MOS) is the most popular choice. However, due to the subjective bias of individual annotators, labor-abundant MOS (LA-MOS) typically requires a large collection of opinion scores from multiple annotators for each image, which significantly increases the learning cost. In this paper, we aim to learn robust IQA models from low-cost MOS (LC-MOS), which requires only a few opinion scores, or even a single one, for each image. More specifically, we treat LC-MOS as a noisy observation of LA-MOS and enforce the IQA model learned from LC-MOS to approach the unbiased estimate of LA-MOS. In this way, we represent the subjective bias between LC-MOS and LA-MOS, and the model bias between IQA predictions learned from LC-MOS and LA-MOS (i.e., the dual-bias), as two latent variables with unknown parameters. By means of expectation-maximization-based alternating optimization, we jointly estimate the parameters of the dual-bias and suppress the misleading effect of LC-MOS via a gated dual-bias calibration (GDBC) module. To the best of our knowledge, this is the first exploration of robust IQA model learning from noisy low-cost labels. Theoretical analysis and extensive experiments on four popular IQA datasets show that the proposed method is robust to different bias rates and annotation numbers, and significantly outperforms other learning-based IQA models when only LC-MOS is available. Furthermore, it achieves performance comparable to models learned with LA-MOS.
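As a rough illustration of the noisy-observation view described above, the sketch below (a hypothetical simplification, not the authors' GDBC implementation) treats each LC-MOS label as a Gaussian-noisy observation of the unknown LA-MOS, estimates the bias statistics from residuals against the current IQA predictions, and subtracts the bias only when a simple gate deems it reliable. All function and parameter names here are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch: model each LC-MOS label y as y = q + b with
# b ~ N(mu, sigma^2), where q is the unobserved LA-MOS and b is the
# subjective bias. Estimate (mu, sigma) from residuals against the
# current model predictions and apply a gated correction.

def gated_bias_calibration(lc_mos, model_pred, gate=0.5):
    """Return bias-corrected training targets for LC-MOS labels."""
    resid = lc_mos - model_pred          # E-step-like: residuals under current model
    mu = resid.mean()                    # M-step-like: Gaussian bias mean
    sigma = max(resid.std(), 1e-6)       # ...and spread
    # Gate: subtract the estimated bias only if it is large relative to
    # its spread; otherwise keep the labels to avoid over-correction.
    if abs(mu) > gate * sigma:
        return lc_mos - mu, mu, sigma
    return lc_mos.copy(), mu, sigma

# Toy usage with synthetic scores; in the full method this calibration
# would alternate with updates of the IQA model itself (EM-style).
rng = np.random.default_rng(0)
la_mos = rng.uniform(1, 5, size=100)            # unobserved "clean" MOS
lc_mos = la_mos + rng.normal(0.3, 0.4, 100)     # single-annotator noisy labels
pred = la_mos + rng.normal(0.0, 0.1, 100)       # imperfect model predictions
targets, mu, sigma = gated_bias_calibration(lc_mos, pred)
print(f"estimated bias: mean={mu:.2f}, std={sigma:.2f}")
```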