Adaptive Occlusion-Aware Network for Occluded Person Re-Identification

Published: 2025 (Last Modified: 05 Nov 2025) · IEEE Trans. Circuits Syst. Video Technol., 2025 · License: CC BY-SA 4.0
Abstract: Occluded person re-identification (ReID) is a challenging task because essential features are often corrupted by obstacles or other pedestrians. Multi-granularity local feature extraction and recognition can effectively improve ReID accuracy under occlusion. However, manual segmentation of local features can cause feature misalignment, and feature alignment based on pose estimation often ignores non-body details (e.g., handbags and backpacks) while increasing model complexity. To address these challenges, we propose a novel Adaptive Occlusion-Aware Network (AOANet), which consists mainly of two modules: the Adaptive Position Extractor (APE) and the Occlusion Awareness Module (OAM). To adaptively extract discriminative features of body parts, APE refines the representation of multi-granularity features under the guidance of an attention mechanism and keypoint features. To further perceive occluded regions, the OAM adaptively computes occlusion weights for body parts. These weights highlight the non-occluded parts and suppress the occluded ones, which in turn improves accuracy under occlusion. Extensive experiments on the MSMT17, DukeMTMC-reID, Market-1501, Occluded-Duke and Occluded-ReID datasets confirm the advantages of our method, and comparative results show that it outperforms comparable methods. On the Occluded-Duke dataset in particular, our method achieves 70.6% mAP and 81.2% Rank-1 accuracy.
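The OAM's reweighting idea can be illustrated with a minimal PyTorch sketch. Everything here (the module name, layer sizes, and the sigmoid scorer) is an assumption for illustration, not the paper's implementation: each body-part feature is mapped to a scalar occlusion weight, and those weights scale the part features so that visible parts dominate the final representation.

```python
import torch
import torch.nn as nn


class OcclusionAwarenessSketch(nn.Module):
    """Hypothetical sketch of the OAM idea: score each body-part feature
    with an occlusion weight in [0, 1], then reweight the parts so that
    non-occluded parts are highlighted and occluded parts suppressed.
    Layer sizes and names are assumptions, not the paper's design."""

    def __init__(self, feat_dim: int = 256):
        super().__init__()
        # Small scorer mapping each part feature to a scalar weight.
        self.scorer = nn.Sequential(
            nn.Linear(feat_dim, feat_dim // 4),
            nn.ReLU(inplace=True),
            nn.Linear(feat_dim // 4, 1),
            nn.Sigmoid(),  # weight near 0 => part treated as occluded
        )

    def forward(self, part_feats: torch.Tensor) -> torch.Tensor:
        # part_feats: (batch, num_parts, feat_dim), e.g. produced by APE.
        weights = self.scorer(part_feats)  # (batch, num_parts, 1)
        return part_feats * weights        # suppress occluded parts


if __name__ == "__main__":
    oam = OcclusionAwarenessSketch(feat_dim=256)
    parts = torch.randn(8, 6, 256)   # 8 images, 6 hypothetical body parts
    print(oam(parts).shape)          # torch.Size([8, 6, 256])
```

In this sketch the weights are learned end-to-end from the part features themselves; the paper's OAM computes its occlusion weights adaptively per body part, and the reweighted parts would then feed the ReID matching head.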