Multi-Modal Feature Enhanced Pedestrian Detection in Nighttime

Published: 15 May 2024 · Last Modified: 15 May 2024 · WIHR 2024 · CC BY 4.0
Keywords: pedestrian detection, cross-modality, genetic algorithm, feature enhancement
Abstract: Methods for moving-pedestrian detection have recently attracted considerable attention, but feature extraction from low-quality visible-light images at nighttime remains highly challenging. Infrared imagery, with its excellent sensitivity to the thermal information of objects, provides an effective complement, so multimodal pedestrian detection methods often yield superior results. This paper proposes a novel pedestrian detection method based on adaptive feature enhancement. Our approach builds on a UNet backbone, incorporating a cross-modal attention-guided learning module with an adaptive structure extraction module at the shallow layers and a genetic-algorithm-based channel exchange module at the deep layers. This feature enhancement module is fully embedded within the detection network. The effectiveness and robustness of our method are evaluated on the LLVIP dataset, which contains annotated pairs of infrared and visible-light pedestrian images. The results show that our method significantly improves pedestrian detection accuracy and notably enhances the quality of pedestrian images in low-light road scenes.
Submission Number: 3
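
The abstract describes a channel exchange module at the deep layers whose exchange pattern is searched by a genetic algorithm. The snippet below is a minimal illustrative sketch of that idea, not the authors' implementation: a PyTorch block that swaps a binary-masked subset of channels between the visible and infrared feature maps, with the GA search stubbed by a random candidate population. All class and function names here are hypothetical.

```python
# Illustrative sketch (assumed interface, not the paper's code): cross-modal
# channel exchange driven by a binary mask. A genetic algorithm would evolve
# the mask externally; here the search loop is replaced by a random population.
import torch
import torch.nn as nn


class ChannelExchange(nn.Module):
    """Swap a mask-selected subset of channels between two modality feature maps."""

    def __init__(self, num_channels: int):
        super().__init__()
        # Binary mask over channels: True = exchange this channel, False = keep.
        # Registered as a buffer so an external search (e.g. a GA) can overwrite it.
        self.register_buffer("exchange_mask",
                             torch.zeros(num_channels, dtype=torch.bool))

    def set_mask(self, mask: torch.Tensor) -> None:
        self.exchange_mask.copy_(mask.to(torch.bool))

    def forward(self, feat_vis: torch.Tensor, feat_ir: torch.Tensor):
        # feat_vis, feat_ir: (B, C, H, W) feature maps from the two branches.
        m = self.exchange_mask.view(1, -1, 1, 1)
        out_vis = torch.where(m, feat_ir, feat_vis)  # masked channels come from IR
        out_ir = torch.where(m, feat_vis, feat_ir)   # and vice versa
        return out_vis, out_ir


def random_population(pop_size: int, num_channels: int) -> torch.Tensor:
    """Placeholder for the GA population: candidate binary exchange masks."""
    return torch.rand(pop_size, num_channels) > 0.5


if __name__ == "__main__":
    C = 64
    block = ChannelExchange(C)
    # In a full pipeline the best mask would be chosen by a GA fitness measure
    # (e.g. detection accuracy on a validation split); here we just take the first.
    population = random_population(pop_size=8, num_channels=C)
    block.set_mask(population[0])
    vis = torch.randn(2, C, 32, 32)
    ir = torch.randn(2, C, 32, 32)
    fused_vis, fused_ir = block(vis, ir)
    print(fused_vis.shape, fused_ir.shape)  # torch.Size([2, 64, 32, 32]) each
```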