VP-YOLO: A human visual perception-inspired robust vehicle-pedestrian detection model for complex traffic scenarios

Wenbo Liu, Xiaoyun Qiao, Chunyu Zhao, Tao Deng, Fei Yan

Published: 01 May 2025 · Last Modified: 15 Nov 2025 · Expert Systems with Applications · CC BY-SA 4.0
Abstract: Rapidly developing intelligent vehicles can provide appropriate assisted-driving strategies based on the driving scenario. As pedestrians and vehicles are the primary participants in these scenarios, accurate detection and localization of both are essential for intelligent driving systems to make reliable decisions in dynamic environments. However, many existing pedestrian and vehicle detection algorithms lack robustness under dynamic and complex traffic conditions, leading to missed detections and false alarms that pose significant safety risks. We categorize complex traffic scenarios into three typical challenges (long-distance, truncation, and occlusion) and focus on improving model robustness under each of them. Inspired by human visual perception, we propose a plug-and-play enhancement stage for the preliminary processing of external information. Specifically, we design a Visual Attention Module (VAM) that enhances the model’s perceptual capability by mimicking the optic chiasm: it collects high-quality horizontal and vertical spatial features and enables efficient interaction between the two. Additionally, we use a Feature Reconstruction Module (FRM) to improve feature quality and strengthen the model’s inference ability. To enable accurate evaluation of different models in complex traffic scenarios, we propose the VP-dataset, a dedicated dataset that incorporates challenging test scenes. Comprehensive experiments on the KITTI benchmark, the Cityscapes dataset, and the proposed VP-dataset demonstrate that our model achieves state-of-the-art performance across various challenging scenarios.
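
The abstract does not detail the VAM's internals, so the following is only a minimal, hypothetical PyTorch sketch of one common way to collect horizontal and vertical spatial descriptors and let them interact (in the spirit of strip/coordinate attention). The module name `HVSpatialAttention`, the layer choices, and the reduction ratio are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class HVSpatialAttention(nn.Module):
    """Hypothetical horizontal/vertical spatial attention with cross-interaction."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        hidden = max(channels // reduction, 8)
        # Strip pooling: collapse one spatial axis at a time.
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))  # keep height, squeeze width
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))  # keep width, squeeze height
        # Shared bottleneck lets the two directional descriptors interact.
        self.mix = nn.Sequential(
            nn.Conv2d(channels, hidden, kernel_size=1),
            nn.BatchNorm2d(hidden),
            nn.ReLU(inplace=True),
        )
        self.to_h = nn.Conv2d(hidden, channels, kernel_size=1)
        self.to_w = nn.Conv2d(hidden, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, h, w = x.shape
        # Directional descriptors: (N, C, H, 1) and (N, C, 1, W).
        feat_h = self.pool_h(x)
        feat_w = self.pool_w(x).permute(0, 1, 3, 2)           # align to (N, C, W, 1)
        # Joint processing over the concatenated H+W axis enables interaction.
        mixed = self.mix(torch.cat([feat_h, feat_w], dim=2))  # (N, hidden, H+W, 1)
        attn_h, attn_w = torch.split(mixed, [h, w], dim=2)
        attn_h = torch.sigmoid(self.to_h(attn_h))                      # (N, C, H, 1)
        attn_w = torch.sigmoid(self.to_w(attn_w.permute(0, 1, 3, 2)))  # (N, C, 1, W)
        return x * attn_h * attn_w  # reweight the input feature map


if __name__ == "__main__":
    x = torch.randn(2, 64, 80, 80)
    print(HVSpatialAttention(64)(x).shape)  # torch.Size([2, 64, 80, 80])
```

Such a block could, under these assumptions, be dropped into a detector backbone as a plug-and-play stage; the paper's actual VAM and FRM designs may differ substantially.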