HPRNet: Human Parsing Reconstruction With Non-Local Multi-Scale Perception Network for Cloth-Changing Person Re-Identification
Abstract: Cloth-changing Person Re-Identification (CC-ReID) is a challenging task that involves matching the same pedestrian across different outfits. Existing methods primarily focus on altering clothing color and directly reconstructing appearance to extract clothing-independent features. However, real pedestrians also differ in height, body shape, and other attributes; because such methods ignore contextual cues (e.g., texture structure and local correlation), they are prone to losing the intrinsic identity information of the original sample, which degrades recognition performance. To address this problem, we propose a framework called HPRNet, or "Human Parsing Reconstruction with Non-Local Multi-Scale Perception Network," which includes a non-local weighted multi-scale perception (NWMP) module and a parsing reconstruction exploration (PRE) module. The NWMP module effectively captures the global receptive field of a sample and models the contextual correlation between non-neighboring pixels within the sample image. The PRE module uses a clothing parsing model to reconstruct human body components more accurately, better distinguishing features related to and unrelated to clothing. Extensive experiments on public CC-ReID datasets (LTCC, PRCC, and CCVID) demonstrate the effectiveness and competitiveness of the proposed method against state-of-the-art (SOTA) baselines on this complex task.
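The abstract does not specify the internal structure of the NWMP module, but its description (global receptive field, correlation between non-neighboring pixels) matches the classic non-local attention idea of Wang et al. (2018). The sketch below is an illustrative PyTorch implementation of that underlying mechanism, not the authors' code; the class name `NonLocalBlock` and the `reduction` parameter are assumptions for this example, and the paper's actual module presumably adds multi-scale weighting on top.

```python
import torch
import torch.nn as nn

class NonLocalBlock(nn.Module):
    """Minimal non-local (self-attention) block over a feature map.
    Illustrative sketch of the mechanism NWMP likely builds on;
    not the paper's implementation."""

    def __init__(self, in_channels: int, reduction: int = 2):
        super().__init__()
        inter = max(in_channels // reduction, 1)
        self.theta = nn.Conv2d(in_channels, inter, kernel_size=1)  # query projection
        self.phi = nn.Conv2d(in_channels, inter, kernel_size=1)    # key projection
        self.g = nn.Conv2d(in_channels, inter, kernel_size=1)      # value projection
        self.out = nn.Conv2d(inter, in_channels, kernel_size=1)    # restore channels

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)  # (B, HW, C')
        k = self.phi(x).flatten(2)                    # (B, C', HW)
        v = self.g(x).flatten(2).transpose(1, 2)      # (B, HW, C')
        # Affinity between every pair of spatial positions, including
        # non-neighboring pixels, giving a global receptive field.
        attn = torch.softmax(q @ k, dim=-1)           # (B, HW, HW)
        y = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)
        return x + self.out(y)                        # residual connection

# Usage: refine a backbone feature map, e.g. (batch, 256, 24, 12).
feat = torch.randn(4, 256, 24, 12)
refined = NonLocalBlock(256)(feat)  # same shape, context-enriched
```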