FRTANet: Feature Reconstruction and Twofold Attention for Occluded Person Re-Identification

Published: 01 Jan 2024, Last Modified: 01 Mar 2025 · IJCNN 2024 · CC BY-SA 4.0
Abstract: Occluded person re-identification (Occluded ReID) aims to match occluded images with holistic images and determine the identity of the occluded pedestrian. Effective pedestrian local features play an important role in Occluded ReID. Previous methods usually rely on pose estimation or human parsing labels to mine pedestrian body part features, but these methods do not perform well under complex occlusion. In this paper, we propose a novel feature reconstruction and twofold attention network (FRTANet) that addresses the occlusion problem without relying on any auxiliary model. Specifically, we design a local branch and a global branch in FRTANet to extract discriminative pedestrian features from different layers of the network. In the local branch, a feature reconstruction module (FRM) is designed to enhance the generalization and robustness of the model. Meanwhile, in the global branch, we introduce a twofold attention module, comprising a multi-channel attention module (MCAM) and a multi-position attention module (MPAM), to help the network focus on the pedestrian's visible body. Furthermore, we build rich pedestrian features through the fusion of global and local features. Experimental results on two occluded datasets (Occluded-DukeMTMC, Occluded-REID) and two holistic datasets (Market1501, DukeMTMC) demonstrate that our approach achieves state-of-the-art performance.
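To make the twofold-attention idea concrete, below is a minimal numpy sketch of channel-wise and position-wise re-weighting of a feature map. This is an illustration only: the abstract does not specify the internals of MCAM and MPAM, so the pooling-plus-sigmoid gating used here, and the function names, are assumptions rather than the paper's actual design (the real modules presumably use learned projections).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat):
    # feat: (C, H, W); squeeze spatial dims, then re-weight each channel
    # (a toy stand-in for MCAM, which would use learned parameters)
    w = sigmoid(feat.mean(axis=(1, 2)))      # (C,) gate per channel
    return feat * w[:, None, None]

def position_attention(feat):
    # feat: (C, H, W); squeeze channels, then re-weight each spatial position
    # (a toy stand-in for MPAM, e.g. to down-weight occluded regions)
    m = sigmoid(feat.mean(axis=0))           # (H, W) gate per position
    return feat * m[None, :, :]

def twofold_attention(feat):
    # apply channel gating followed by position gating; shape is preserved
    return position_attention(channel_attention(feat))

feat = np.random.rand(8, 4, 4).astype(np.float32)  # hypothetical feature map
out = twofold_attention(feat)
assert out.shape == feat.shape
```

Since both gates lie in (0, 1), the output is an attenuated copy of the input feature map: channels and positions that contribute little (e.g. occluded regions, in spirit) are suppressed rather than removed.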
