Abstract: Visible-infrared person re-identification (VI-ReID) seeks to identify and match individuals across the visible and infrared ranges in intelligent monitoring environments. Most current approaches adopt a two-stream network structure that extracts global or rigidly split part features, and introduce an extra modality for image compensation to guide the network in reducing the large gap between the two modalities. However, these methods are sensitive to misalignment caused by pose/viewpoint variations and to the additional noise produced by extra-modality generation. In this article, we explicitly address these issues and propose a Cross-modality Semantic Consistency Learning (CSCL) network that excavates semantically consistent features across modalities by exploiting human semantic information. Specifically, a Parsing-aligned Attention Module (PAM) is introduced to filter out irrelevant noise with channel-wise attention and dynamically highlight semantic-aware representations across modalities at different stages of the network. A Semantic-guided Part Alignment Module (SPAM) is then introduced to efficiently produce a set of semantically aligned fine-grained features under parsing-loss and division-loss constraints, enhancing the overall person representation. Finally, an Identity-aware Center Mining (ICM) loss is presented to reduce the distance between modality centers within each class, further alleviating intra-class modality discrepancy. Extensive experiments show that CSCL outperforms state-of-the-art methods on the SYSU-MM01 and RegDB datasets. Notably, it achieves 75.72%/72.08% Rank-1/mAP accuracy on SYSU-MM01.
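The ICM loss described above pulls together, for each identity, the feature centers of the visible and infrared modalities. The following is a minimal sketch of that idea; the function name, the batch layout (feature matrix plus identity and modality label arrays), and the squared-distance form are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def icm_loss(features, labels, modalities):
    """Sketch of an identity-aware center loss: for each identity present
    in both modalities, penalize the squared distance between its
    visible-modality center and its infrared-modality center.
    (Hypothetical interface: modality 0 = visible, 1 = infrared.)"""
    loss, count = 0.0, 0
    for pid in np.unique(labels):
        vis = features[(labels == pid) & (modalities == 0)]
        ir = features[(labels == pid) & (modalities == 1)]
        if len(vis) == 0 or len(ir) == 0:
            continue  # identity not observed in both modalities this batch
        c_vis, c_ir = vis.mean(axis=0), ir.mean(axis=0)
        loss += float(np.sum((c_vis - c_ir) ** 2))
        count += 1
    return loss / max(count, 1)  # average over identities seen in both modalities
```

Minimizing such a term drives the two modality centers of each class toward each other, which is one direct way to shrink intra-class modality discrepancy.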