RFFR-Net: Robust feature fusion and reconstruction network for clothing-change person re-identification
Abstract: In person re-identification (ReID), and especially in clothing-change scenarios (CC-ReID), traditional approaches rely on clothing features, which are inherently unstable; recognition accuracy therefore drops sharply when clothes change. To address this problem, this study proposes an innovative framework, the Robust Feature Fusion and Reconstruction Network for Clothing-Change Person ReID (RFFR-Net), which strengthens the model's ability to process non-clothing features (e.g., face and body shape) by incorporating a Feature Attention Module (FAM) and an Advanced Attention Module (AAM). In addition, the generative component of RFFR-Net is optimized with a Refined Feature Reconstruction Module (RFRM), which improves feature extraction and processing and thereby raises the quality of image reconstruction and the accuracy of detail representation. Experiments on three CC-ReID datasets show that the proposed method improves mAP and CMC by approximately 1.5% over the latest methods and ranks within the top three in most evaluations. These results confirm the applicability of RFFR-Net to person re-identification and demonstrate its robustness and efficiency under clothing changes.