Learning Cross Camera Invariant Features with CCSC Loss for Person Re-identification

Published: 01 Jan 2019, Last Modified: 12 Aug 2024. ICIG (1) 2019. License: CC BY-SA 4.0
Abstract: Person re-identification (re-ID) is mainly deployed in multi-camera surveillance scenes, so learning cross-camera invariant features is essential. In this paper, we propose a novel loss, the Cross Camera Similarity Constraint loss (CCSC loss), which jointly exploits camera ID and person ID information to construct cross-camera image pairs and imposes a cosine similarity constraint on them. The proposed CCSC loss effectively reduces intra-class variance by forcing the whole network to extract cross-camera invariant features, and it can be combined with the identification loss in a multi-task manner. Extensive experiments on the standard benchmark datasets CUHK03, DukeMTMC-reID, Market-1501, and MSMT17 show that the proposed CCSC loss brings a large performance gain over a strong baseline and is also superior to other metric learning methods such as hard triplet loss and center loss. For instance, on the most challenging dataset, CUHK03-Detect, Rank-1 accuracy and mAP improve over the baseline by 10.0% and 10.2%, respectively, reaching performance comparable to the state of the art.
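As a rough illustration of the idea described in the abstract, the following is a minimal PyTorch-style sketch of a cross-camera cosine similarity constraint. It is not the authors' implementation; the function name `ccsc_loss`, the pair-masking logic, and the weighting factor `lambda_ccsc` in the usage line are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def ccsc_loss(features, person_ids, camera_ids):
    """Sketch of a Cross Camera Similarity Constraint (CCSC) loss.

    For every pair of samples in the batch that shares a person ID but
    comes from different cameras, penalize low cosine similarity between
    their embeddings, pushing the network toward cross-camera invariant
    features. (Illustrative only, not the paper's exact formulation.)
    """
    # Pairwise cosine similarity between all embeddings in the batch.
    normed = F.normalize(features, dim=1)
    sim = normed @ normed.t()                        # (B, B)

    same_person = person_ids.unsqueeze(0) == person_ids.unsqueeze(1)
    diff_camera = camera_ids.unsqueeze(0) != camera_ids.unsqueeze(1)
    mask = same_person & diff_camera                 # cross-camera positive pairs

    if mask.sum() == 0:                              # no valid pairs in this batch
        return features.new_zeros(())
    # Encourage cosine similarity of cross-camera pairs toward 1.
    return (1.0 - sim[mask]).mean()

# Hypothetical multi-task combination with an identification (cross-entropy) loss:
# loss = id_loss + lambda_ccsc * ccsc_loss(features, person_ids, camera_ids)
```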