Subjective and Objective Quality Assessment of Display Content Videos

Yijie Huang, Fangfang Lu, Huiqun Yu, Kaiwei Zhang, Wei Sun, Xiongkuo Min, Guangtao Zhai

Published: 01 Jan 2025, Last Modified: 26 Jan 2026 · IEEE Transactions on Circuits and Systems for Video Technology · CC BY-SA 4.0
Abstract: Display quality assessment plays a crucial role in evaluating the performance of display devices. However, existing video quality assessment methods primarily target compression-related distortions and fail to capture display-specific degradations, including definition loss, color distortions, and motion artifacts, that critically affect users' subjective experience during video playback. To address these limitations, we develop a specialized video dataset, namely the Video Displaying Quality Assessment Dataset (VDQA), constructed using a DSLR camera with standardized optimization of exposure settings (aperture, ISO sensitivity, and shutter speed). VDQA comprises 250 high-resolution video clips covering diverse content categories, providing a robust foundation for evaluating display devices across multiple quality dimensions. Additionally, we propose a deep learning-based model specifically designed for display quality assessment that employs three complementary pathways to independently evaluate definition, color fidelity, and motion quality. The model integrates Canny edge detection for explicit sharpness measurement, a color attention mechanism to enhance sensitivity to display color reproduction characteristics, and temporal modeling for motion artifact assessment. Experimental results demonstrate that the proposed model outperforms state-of-the-art methods in reflecting users' subjective experience of display content videos, with significant improvements in both color fidelity assessment and definition evaluation.
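The abstract mentions Canny edge detection as an explicit sharpness cue. The paper's actual pathway is not specified here, but the idea of scoring definition from edge strength can be illustrated with a minimal NumPy-only sketch: it keeps just the gradient-magnitude stage of Canny (omitting Gaussian smoothing, non-maximum suppression, and hysteresis) and reports the fraction of strong-edge pixels. The function name and threshold are illustrative assumptions, not the authors' method.

```python
import numpy as np

def edge_density_sharpness(frame: np.ndarray, threshold: float = 0.2) -> float:
    """Crude sharpness proxy: fraction of pixels with strong gradients.

    `frame` is a 2-D grayscale array with values in [0, 1]. A full Canny
    detector would add smoothing, non-maximum suppression, and hysteresis;
    only the gradient-magnitude stage is kept here for illustration.
    """
    # Finite differences along rows (gy) and columns (gx).
    gy, gx = np.gradient(frame.astype(np.float64))
    mag = np.hypot(gx, gy)          # gradient magnitude per pixel
    return float((mag > threshold).mean())

# A sharp vertical step edge scores higher than a flat gray field.
sharp = np.zeros((8, 8)); sharp[:, 4:] = 1.0
flat = np.full((8, 8), 0.5)
print(edge_density_sharpness(sharp) > edge_density_sharpness(flat))  # True
```

In a real definition pathway such a hand-crafted cue would typically be fused with learned features rather than used on its own, since edge density alone conflates sharpness with scene texture.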