Keywords: Visible-Infrared Person Re-Identification, Cross-modality, Feature Fusion, Data Augmentation, Deep Mutual Learning
Verify Author List: I have double-checked the author list and understand that additions and removals will not be allowed after the submission deadline.
TL;DR: VI-ReID via Feature Fusion and Deep Mutual Learning
Abstract: Visible-Infrared Person Re-Identification (VI-ReID) aims to retrieve person images captured from both visible and infrared camera views. To address the modality gap between visible and infrared images, we propose a VI-ReID network based on Feature Fusion and Deep Mutual Learning (DML). To enhance the model's robustness to color, we introduce a novel data augmentation method called Random Combination of Channels (RCC), which generates new images by randomly combining the R, G, and B channels of visible images. Furthermore, to capture more informative person features, we fuse the features from the middle layers of the network. To reduce the model's dependence on global features, we employ the fusion branch as an auxiliary branch and train the global and fusion branches synchronously through Deep Mutual Learning. Extensive experiments on the SYSU-MM01 and RegDB datasets validate the superiority of our method, which achieves strong performance compared with other state-of-the-art approaches.
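The abstract's two main mechanisms can be illustrated concretely. Below is a minimal PyTorch sketch, not the paper's implementation: it assumes RCC forms each output channel by sampling one of the input R/G/B channels uniformly at random (the paper's exact combination rule may differ), and it shows the symmetric KL term commonly used in Deep Mutual Learning between the global and fusion branch logits. The function names `random_channel_combination` and `mutual_learning_loss` are hypothetical, and loss weighting against the identity loss is omitted.

```python
import torch
import torch.nn.functional as F

def random_channel_combination(img: torch.Tensor) -> torch.Tensor:
    """Hypothetical RCC sketch: build a new 3-channel image whose
    channels are drawn (with replacement) from the R, G, B channels
    of a visible image `img` of shape (3, H, W)."""
    idx = torch.randint(0, 3, (3,))  # e.g. (0, 0, 2) -> (R, R, B)
    return img[idx]

def mutual_learning_loss(logits_global: torch.Tensor,
                         logits_fusion: torch.Tensor) -> torch.Tensor:
    """Symmetric KL divergence used in Deep Mutual Learning: each
    branch is trained to match the other's class distribution."""
    log_p_g = F.log_softmax(logits_global, dim=1)
    log_p_f = F.log_softmax(logits_fusion, dim=1)
    kl_gf = F.kl_div(log_p_g, log_p_f, reduction="batchmean", log_target=True)
    kl_fg = F.kl_div(log_p_f, log_p_g, reduction="batchmean", log_target=True)
    return kl_gf + kl_fg
```

In a DML setup of this kind, the mutual KL term is typically added to each branch's own supervised (identity) loss, so the auxiliary fusion branch regularizes the global branch rather than replacing it.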
A Signed Permission To Publish Form In Pdf: pdf
Primary Area: Applications (bioinformatics, biomedical informatics, climate science, collaborative filtering, computer vision, healthcare, human activity recognition, information retrieval, natural language processing, social networks, etc.)
Paper Checklist Guidelines: I certify that all co-authors of this work have read and commit to adhering to the guidelines in Call for Papers.
Student Author: Yes
Submission Number: 36