Abstract: Display quality assessment plays a crucial role in evaluating the performance of display devices. However, existing video quality assessment methods primarily target compression-related distortions and fail to capture display-specific degradations, including definition loss, color distortions, and motion artifacts, that critically affect users' subjective experience during video playback. To address these limitations, we develop a specialized video dataset, namely the Video Displaying Quality Assessment Dataset (VDQA), constructed using a DSLR camera with standardized optimization of exposure parameters (aperture, ISO sensitivity, and shutter speed). VDQA comprises 250 high-resolution video clips covering diverse content categories, providing a robust foundation for evaluating display devices across multiple quality dimensions. Additionally, we propose a deep learning-based model specifically designed for display quality assessment that employs three complementary pathways to independently evaluate definition, color fidelity, and motion quality. The model integrates Canny edge detection for explicit sharpness measurement, a color attention mechanism to enhance sensitivity to display color reproduction characteristics, and temporal modeling for motion artifact assessment. Experimental results demonstrate that the proposed model outperforms state-of-the-art methods in reflecting users' subjective experience of displayed video content, with significant improvements in both color fidelity assessment and definition evaluation.
External IDs: doi:10.1109/tcsvt.2025.3642689
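
The abstract gives only a high-level description of the architecture; the following is a minimal, hypothetical PyTorch sketch of how a three-pathway model of this kind (Canny-based definition cue, channel-wise color attention, 3D-convolutional temporal branch) could be wired together. All names (ThreePathwayModel, ColorAttention, canny_edge_map), layer widths, and Canny thresholds are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a three-pathway display-quality model; not the paper's code.
import cv2
import numpy as np
import torch
import torch.nn as nn


def canny_edge_map(frame_rgb: np.ndarray) -> torch.Tensor:
    """Explicit sharpness cue: Canny edge map of one RGB frame (H, W, 3, uint8)."""
    gray = cv2.cvtColor(frame_rgb, cv2.COLOR_RGB2GRAY)
    edges = cv2.Canny(gray, 100, 200)                      # thresholds are placeholders
    return torch.from_numpy(edges).float().div_(255.0)     # (H, W) in [0, 1]


class ColorAttention(nn.Module):
    """Channel attention over per-channel means (squeeze-and-excitation style)."""
    def __init__(self, channels: int):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, max(channels // 2, 1)), nn.ReLU(),
            nn.Linear(max(channels // 2, 1), channels), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:    # x: (B, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))                    # (B, C) channel weights
        return x * w.unsqueeze(-1).unsqueeze(-1)


class ThreePathwayModel(nn.Module):
    """Definition, color, and motion pathways fused into a single quality score."""
    def __init__(self):
        super().__init__()
        # Definition branch sees RGB + Canny edge map (4 input channels).
        self.definition = nn.Sequential(nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
                                        nn.AdaptiveAvgPool2d(1))
        self.color_attn = ColorAttention(3)
        self.color = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                   nn.AdaptiveAvgPool2d(1))
        # Motion branch models the whole clip with 3D convolutions.
        self.motion = nn.Sequential(nn.Conv3d(3, 16, 3, padding=1), nn.ReLU(),
                                    nn.AdaptiveAvgPool3d(1))
        self.head = nn.Linear(48, 1)                        # fuse 16+16+16 features

    def forward(self, clip: torch.Tensor, edges: torch.Tensor) -> torch.Tensor:
        # clip: (B, T, 3, H, W) float in [0, 1]; edges: (B, T, H, W) Canny maps
        b, t = clip.shape[:2]
        frames = clip.flatten(0, 1)                          # (B*T, 3, H, W)
        d = self.definition(torch.cat([frames, edges.flatten(0, 1).unsqueeze(1)], dim=1))
        c = self.color(self.color_attn(frames))
        d = d.view(b, t, -1).mean(dim=1)                     # temporal average per branch
        c = c.view(b, t, -1).mean(dim=1)
        m = self.motion(clip.transpose(1, 2)).flatten(1)     # (B, 16)
        return self.head(torch.cat([d, c, m], dim=1)).squeeze(-1)
```

In this sketch the Canny map is concatenated with each RGB frame so the definition branch receives an explicit sharpness cue, the color branch reweights channels before convolution, and the motion branch processes the full clip with 3D convolutions before fusion; the paper's actual backbones, fusion strategy, and training objective may differ.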