Abstract: Designing learning-based no-reference (NR) video quality assessment (VQA) algorithms for camera-captured videos is cumbersome due to the large number of human quality annotations required. In this work, we propose a semi-supervised learning (SSL) framework that exploits a large number of unlabelled videos together with a very limited number of labelled, authentically distorted videos. Our main contributions are twofold. First, leveraging the benefits of consistency regularization and pseudo-labelling, our SSL model generates pairwise pseudo-ranks for the unlabelled videos using a student-teacher model on strong-weak augmented videos. We design the strong-weak augmentations to be quality invariant so that the unlabelled videos can be used effectively in SSL. The generated pseudo-ranks are used along with the limited labels to train our SSL model. Our primary focus in SSL for NR VQA is to learn a mapping from video feature representations to quality scores. We compare various feature extraction methods and show that our SSL framework improves performance on all of these features. Second, we present a spatial and temporal feature extraction method based on predicting spatial and temporal entropic differences. We show that these features achieve robust performance when trained with limited data, providing a better baseline on which to apply SSL. Extensive experiments on three popular VQA datasets demonstrate that the proposed semi-supervised VQA method improves on the performance of existing methods in terms of correlation with human opinion by approximately $15 \! - \! 20 \%$.
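To make the pseudo-ranking idea concrete, the sketch below shows one plausible instantiation, not the paper's exact formulation: a teacher scores weakly augmented unlabelled videos, the signs of the pairwise score differences serve as pseudo-ranks, and the student is trained on strongly augmented copies with a margin ranking loss against those pseudo-ranks. All function names and the hinge-style loss are illustrative assumptions.

```python
import numpy as np

def pairwise_pseudo_ranks(teacher_scores):
    """Pseudo-rank r_ij = +1 if the teacher predicts video i has higher
    quality than video j, -1 if lower, 0 if tied. (Illustrative sketch.)"""
    s = np.asarray(teacher_scores, dtype=float)
    return np.sign(s[:, None] - s[None, :])

def pairwise_ranking_loss(student_scores, ranks, margin=1.0):
    """Margin ranking loss over unique pairs: penalise student score
    differences that disagree with (or fall within `margin` of violating)
    the teacher's pseudo-rank. A stand-in for the actual SSL objective."""
    d = np.asarray(student_scores, dtype=float)
    diff = d[:, None] - d[None, :]           # student score differences
    hinge = np.maximum(0.0, margin - ranks * diff)
    iu = np.triu_indices(len(d), k=1)        # each unordered pair once
    return hinge[iu].mean()

# Teacher scores on weakly augmented videos -> pseudo-ranks.
ranks = pairwise_pseudo_ranks([0.9, 0.2, 0.5])
# Student scores on strongly augmented copies of the same videos.
loss = pairwise_ranking_loss([2.0, 0.0, 1.0], ranks)
```

In this toy batch the student's ordering matches the teacher's with ample margin, so the loss is zero; flipping any pair of student scores would make it positive.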