Abstract: Multispectral pedestrian detection is of great importance in around-the-clock applications, e.g., self-driving and video surveillance. Fusing features from RGB images and thermal infrared (TIR) images to exploit the complementary information between modalities is one of the most effective ways to improve multispectral pedestrian detection performance. However, spatial misalignment between modalities and imbalanced modality reliability can introduce harmful information during feature fusion, limiting detection performance. To address these issues, we propose an attentive alignment network consisting of an attentive position alignment (APA) module and an attentive modality alignment (AMA) module. The APA module emphasizes pedestrian regions while aligning them across modalities. The AMA module utilizes a channel-wise attention mechanism with illumination guidance to mitigate the imbalance between modalities. Experiments are conducted on two widely used multispectral pedestrian detection datasets, KAIST and CVC-14, and our approach surpasses the current state-of-the-art performance on both.
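To make the AMA idea concrete, below is a minimal PyTorch sketch of illumination-guided channel-wise attention, written as one plausible reading of the abstract. The class name, layer sizes, and the specific way an illumination score reweights the two modalities are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class IlluminationGuidedChannelAttention(nn.Module):
    """Sketch of an AMA-style fusion: channel attention per modality,
    gated by a scalar illumination score. All details are assumed."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Squeeze-and-excitation style channel attention,
        # shared across modalities for brevity in this sketch.
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, rgb_feat, tir_feat, illum_score):
        # illum_score in [0, 1]: higher means a better-lit scene,
        # so the RGB branch is weighted up and TIR gets the complement.
        b, c, _, _ = rgb_feat.shape
        w_rgb = self.fc(self.pool(rgb_feat).view(b, c)).view(b, c, 1, 1)
        w_tir = self.fc(self.pool(tir_feat).view(b, c)).view(b, c, 1, 1)
        illum = illum_score.view(b, 1, 1, 1)
        return illum * w_rgb * rgb_feat + (1.0 - illum) * w_tir * tir_feat

# Hypothetical usage on a pair of backbone feature maps:
attn = IlluminationGuidedChannelAttention(channels=256)
rgb = torch.randn(2, 256, 32, 32)
tir = torch.randn(2, 256, 32, 32)
illum = torch.tensor([0.9, 0.2])   # e.g., a day frame and a night frame
fused = attn(rgb, tir, illum)      # -> (2, 256, 32, 32)
```

Gating the fused features by an illumination score lets the network lean on RGB features in well-lit scenes and on TIR features at night, which matches the around-the-clock motivation stated in the abstract.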