Joint feature augmentation and posture label for cloth-changing person re-identification

Published: 01 Jan 2025 · Last Modified: 11 Apr 2025 · Multimedia Systems, 2025 · CC BY-SA 4.0
Abstract: Cloth-changing person re-identification (CC-ReID) aims to retrieve a target person who changes clothes over a long period across multiple non-overlapping cameras. The key to solving this problem is capturing cloth-irrelevant, identity-related features. Current research mainly employs multi-modal methods to extract discriminative features from person images; however, these methods often perform poorly when a person's appearance changes dramatically. To cope with the feature instability caused by appearance changes, we propose a method that jointly exploits feature augmentation and posture labels for cloth-changing person re-identification. Specifically, we design a two-branch structure that combines global feature enhancement and posture guidance to capture features tied to the identity itself. To augment the global features, we add camera and cloth embeddings to the vision transformer; by incorporating this auxiliary information, more cues can be mined from multiple views. In addition, we introduce a Random Dual Mask strategy that weakens part information in selected continuous patches, aiming to mask the clothing region. To further facilitate the learning of cloth-irrelevant features, a pose-informed alignment loss is employed in the posture branch: we aggregate and update the extracted body key points to obtain more closely associated posture labels. Extensive experiments conducted on four CC-ReID datasets demonstrate the superiority of the proposed method.
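The abstract describes two token-level operations on the vision transformer input: adding camera and cloth embeddings to the patch tokens, and a Random Dual Mask that suppresses runs of consecutive patches. The sketch below is a minimal NumPy illustration of both ideas; the table shapes, the use of two masked spans, and the function names are assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_side_embeddings(patch_tokens, cam_id, cloth_id, cam_table, cloth_table):
    """Add camera and cloth embeddings (rows of lookup tables, standing in for
    learnable parameters) to every patch token. Shapes: patch_tokens (n, d),
    cam_table (num_cams, d), cloth_table (num_cloth_ids, d)."""
    return patch_tokens + cam_table[cam_id] + cloth_table[cloth_id]

def random_dual_mask(patch_tokens, span=4):
    """Zero out two randomly chosen runs of `span` consecutive patches --
    a guess at the 'Random Dual Mask' strategy; the real method may mask
    in image space or use a different masking value."""
    n = patch_tokens.shape[0]
    out = patch_tokens.copy()
    for _ in range(2):                       # "dual": two masked spans
        start = rng.integers(0, n - span + 1)
        out[start:start + span] = 0.0        # spans may overlap
    return out

# Toy usage: 16 patch tokens of dimension 8.
tokens = rng.normal(size=(16, 8))
tokens = add_side_embeddings(tokens, cam_id=2, cloth_id=1,
                             cam_table=rng.normal(size=(6, 8)),
                             cloth_table=rng.normal(size=(3, 8)))
masked = random_dual_mask(tokens, span=4)
```

Masking contiguous spans rather than independent patches reflects the abstract's "selected continuous patches" phrasing, which better matches the spatial extent of a clothing region than scattered single-patch dropout.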