Context-Aware Relative Distinctive Feature Learning for Person Re-identification

Published: 01 Jan 2024 · Last Modified: 06 Nov 2025 · ICIC (8) 2024 · CC BY-SA 4.0
Abstract: In the context of large-scale crowd monitoring, the presence of visually similar individuals significantly increases the complexity of person re-identification tasks. Current research concentrates predominantly on two aspects: fine-grained feature learning and hard example mining. However, these approaches have noticeable shortcomings. Fine-grained feature learning does not sufficiently account for the relativity of distinctive features: the features that distinguish an individual may vary depending on whom that individual is being compared against. The commonly used Triplet Loss requires maintaining a substantial margin in the feature space between visually similar local features of different identities. This, however, contradicts the principle of visual consistency, which states that similar inputs to a neural network should yield closely aligned feature maps in the feature space. Such a contradiction may cause models to struggle to fit these samples accurately. To overcome these limitations, we propose a Context-Aware Relative Distinctive Feature Learning methodology for Person Re-Identification. Our model incorporates the Exploring Relative Discriminative Regions with Contextual Awareness Module and the Visually Consistent N-tuple Loss, each specifically designed to address the aforementioned challenges. Experimental findings on several commonly used person re-identification datasets support the effectiveness of our approach.
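To make the abstract's critique concrete, the sketch below implements the standard triplet loss it refers to (not the paper's proposed Visually Consistent N-tuple Loss). The margin value and the toy feature vectors are illustrative assumptions; the point is that the loss demands a margin-wide gap even when anchor and negative are nearly identical, which is the tension with visual consistency described above.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.3):
    """Standard triplet loss: requires the anchor-negative distance to
    exceed the anchor-positive distance by at least `margin`.
    (margin=0.3 is an illustrative choice, not taken from the paper.)"""
    d_ap = np.linalg.norm(anchor - positive)
    d_an = np.linalg.norm(anchor - negative)
    return max(0.0, d_ap - d_an + margin)

# A hard negative that looks almost like the anchor: the loss still
# demands a margin-wide separation between near-identical inputs,
# contradicting the expectation that similar inputs map to nearby features.
anchor   = np.array([1.0, 0.0])
positive = np.array([0.9, 0.1])
negative = np.array([0.95, 0.05])  # visually similar, different identity
loss = triplet_loss(anchor, positive, negative)
```

Because the negative is closer to the anchor than the positive is, the hinge stays active and the optimizer is pushed to separate two nearly identical inputs by the full margin.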