Abstract: Numerous metrics exist for overall Quality of Experience (QoE), both Full Reference (FR) metrics, such as Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM), and No Reference (NR) metrics, such as Video Quality Indicators (VQI), that are successfully used in video processing systems to assess videos whose quality is degraded by various processing scenarios. However, they are not appropriate for video sequences used in recognition tasks (Target Recognition Videos, TRV). Accurately assessing the performance of a video processing pipeline in both human and Computer Vision (CV) recognition tasks therefore remains a significant research problem, and objective methods for assessing video quality in recognition tasks are needed. In response to this demand, we demonstrate in this research that it is feasible to create a novel objective model for assessing video quality in Automatic Licence Plate Recognition (ALPR) tasks. The model is trained, tested, and validated on a representative set of image sequences. The set of degradation scenarios is based on a digital camera model and on how a scene's luminous flux is ultimately transformed into a digital image. The generated degraded images are evaluated for ALPR and VQI using a CV library. The measured accuracy of the model is an F-measure of 0.777.
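The reported accuracy score is the F-measure, i.e. the harmonic mean of precision and recall over the recognition outcomes. As a minimal sketch (the precision and recall values below are purely illustrative, not taken from the paper), it can be computed as:

```python
def f_measure(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall (the F1 score)."""
    if precision + recall == 0.0:
        return 0.0  # convention: F-measure is 0 when both terms are 0
    return 2.0 * precision * recall / (precision + recall)


# Illustrative (hypothetical) values only:
print(f_measure(0.8, 0.6))
```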