Keywords: pedestrian detection, cross-modality, genetic algorithm, feature enhancement
TL;DR: This paper addresses the topic of "AI-based analysis of human movement". We propose a novel moving pedestrian detection method based on multi-modal fusion and feature enhancement.
Abstract: Moving pedestrian detection has recently attracted considerable attention, but feature extraction from low-quality visible-light images at night remains challenging. Infrared images, with their sensitivity to the thermal signatures of objects, provide an effective complement, so multimodal pedestrian detection methods often yield superior results. This paper proposes a novel pedestrian detection method based on adaptive feature enhancement. Our approach is built on a UNet backbone, incorporating a cross-modal attention-guided learning module with an adaptive structure extraction module at the shallow layers and a genetic-algorithm-based channel exchange module at the deep layers. The feature enhancement module is fully embedded within the detection network. The effectiveness and robustness of our method are evaluated on the LLVIP dataset, which contains annotated pairs of infrared and visible-light pedestrian images. The results show that our method significantly improves pedestrian detection accuracy and notably enhances the quality of pedestrian images in low-light road scenes.
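To illustrate the channel-exchange idea mentioned in the abstract, the sketch below swaps selected feature channels between the visible-light and infrared streams. The function and variable names are hypothetical, and the binary mask is fixed here for illustration; in the proposed method the exchange pattern at the deep layers is selected by a genetic algorithm.

```python
import numpy as np

def channel_exchange(feat_vis, feat_ir, mask):
    """Swap selected channels between two modality feature maps.

    feat_vis, feat_ir: feature maps of shape (C, H, W).
    mask: boolean array of shape (C,); True means exchange that channel.
    (Hypothetical sketch: the paper evolves the mask with a genetic
    algorithm rather than fixing it by hand.)
    """
    out_vis = feat_vis.copy()
    out_ir = feat_ir.copy()
    out_vis[mask] = feat_ir[mask]  # IR channels flow into the visible stream
    out_ir[mask] = feat_vis[mask]  # and vice versa
    return out_vis, out_ir

# Toy example: 4 channels, exchange channels 1 and 3.
vis = np.zeros((4, 2, 2))
ir = np.ones((4, 2, 2))
mask = np.array([False, True, False, True])
new_vis, new_ir = channel_exchange(vis, ir, mask)
```

After the exchange, the masked channels of each stream carry the other modality's features, letting later layers fuse thermal and visible information without extra parameters.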
Submission Number: 7