PRNet: A Progressive Regression Network for No-Reference User-Generated-Content Video Quality Assessment

29 Sept 2021 (modified: 13 Feb 2023) · ICLR 2022 Conference Withdrawn Submission · Readers: Everyone
Abstract: Non-professional video, commonly known as User Generated Content (UGC), has become very popular in today’s video sharing applications. However, objective perceptual quality assessment of UGC videos remains a challenging problem, for several reasons. First, the pristine sources of UGC videos are unavailable, so the appropriate technique is no-reference video quality assessment (NR-VQA). Second, the subjective mean opinion scores (MOS) of existing UGC datasets are not uniformly distributed; even the largest UGC video dataset, YouTube-UGC, has a right-skewed MOS distribution. Third, the authentic degradations occurring in these videos are diverse and therefore unpredictable. For example, for an over- or under-exposed image or video, static brightness and contrast information is important for evaluation; relying only on verified prior statistical knowledge or on generalized learned knowledge may not cover all possible distortions. To address these problems, we introduce a novel NR-VQA framework, the Progressive Regression Network (PRNet). For the skewed-MOS problem, we propose a progressive regression model that applies a coarse-to-fine strategy during training. This strategy turns sparse subjective human rating scores into integer labels with denser samples per class, which alleviates the imbalanced-sample problem and makes training smoother. For the unpredictable-distortions problem, we develop a wide-and-deep model based on our PRNet, which employs both low-level features generated from natural scene statistics (NSS) and high-level semantic features extracted by deep neural networks, fusing memorized prior knowledge with generalized learned features.
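The coarse quantization step of the coarse-to-fine strategy described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, score range, and bin count are assumptions chosen for the 1–5 MOS scale common in VQA datasets.

```python
import numpy as np

def coarse_labels(mos, lo=1.0, hi=5.0, n_bins=5):
    """Quantize continuous MOS values in [lo, hi] into integer bin indices.

    Sparse, skewed continuous scores collapse into a small number of
    integer classes, each with denser samples, which a coarse regression
    stage can fit before a finer stage refines the prediction.
    """
    edges = np.linspace(lo, hi, n_bins + 1)
    # np.digitize against the interior edges yields indices 0 .. n_bins-1
    return np.clip(np.digitize(mos, edges[1:-1]), 0, n_bins - 1)

# Right-skewed ratings (as in YouTube-UGC) concentrate in the upper bins:
mos = np.array([3.9, 4.1, 4.3, 4.4, 2.1, 3.8, 4.0])
print(coarse_labels(mos))  # → [3 3 4 4 1 3 3]
```

A finer stage would then regress the residual between the bin center and the true MOS, so training starts from a denser, better-balanced target distribution.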
Our experimental results demonstrate that our proposed PRNet achieves state-of-the-art performance on the three currently most popular UGC-VQA datasets (KoNViD-1k, LIVE-VQC, and YouTube-UGC).
One-sentence Summary: Proposes a novel no-reference video quality assessment framework that achieves state-of-the-art performance on three main datasets.