Abstract: Gait recognition is a soft biometric technology for identifying pedestrians observed from different camera views based on their specific walking patterns. However, diverse dressing and wearing conditions pose great challenges to realistic gait recognition. Most existing methods take the holistic gait silhouette as input and focus on local areas through horizontal strip division or attention maps. We argue that such processing may yield mixed or incomplete information about multiple body parts, so that gait information is misused or underutilized. In this paper, we propose a parsing-guided framework for gait recognition, named GaitParsing, which leverages human semantic parsing to dissect the human body into a set of specific and complete body parts. Correspondingly, a simple yet effective dual-branch feature extraction network is adopted to process the holistic gait and the distinct body parts. To maximize the use of highly discriminative gait frames, we propose a self-occlusion frame assessment to measure the self-occlusion in a gait sequence. Since current gait datasets provide no human parsing modality, we further develop a general human parsing pipeline specifically tailored for gait datasets; a single training of this pipeline enables widespread application across various gait datasets. Extensive experiments with ablation analyses demonstrate competitive performance even in the most challenging conditions, e.g., Cloth-Changing (CC, +5.9\%). Notably, our model can be easily applied to existing methods and significantly outperforms the original architectures, even without much modification. Code is available at https://github.com/wzb-bupt/GaitParsing.
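To make the dual-branch idea concrete, below is a minimal sketch (not the authors' released implementation; see the GitHub repository for that) of how a holistic-silhouette branch and a parsing-guided part branch could be combined into one per-frame descriptor. The module name, channel sizes, and pooling choices here are illustrative assumptions only.

```python
# Hypothetical sketch of a dual-branch feature extractor:
# one branch encodes the holistic silhouette, the other encodes
# parsing-derived body-part masks; both are pooled and concatenated.
import torch
import torch.nn as nn


class DualBranchSketch(nn.Module):
    def __init__(self, num_parts: int = 6, feat_dim: int = 64):
        super().__init__()
        # Holistic branch: 1-channel silhouette -> feature map
        self.global_branch = nn.Sequential(
            nn.Conv2d(1, feat_dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, padding=1), nn.ReLU(),
        )
        # Part branch: one channel per parsed body part -> feature map
        self.part_branch = nn.Sequential(
            nn.Conv2d(num_parts, feat_dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, padding=1), nn.ReLU(),
        )

    def forward(self, silhouette: torch.Tensor, parsing: torch.Tensor) -> torch.Tensor:
        # silhouette: (B, 1, H, W); parsing: (B, num_parts, H, W) part masks
        g = self.global_branch(silhouette).mean(dim=(2, 3))  # global average pooling
        p = self.part_branch(parsing).mean(dim=(2, 3))
        return torch.cat([g, p], dim=1)  # fused per-frame descriptor


if __name__ == "__main__":
    sil = torch.rand(2, 1, 64, 44)   # toy silhouette batch
    par = torch.rand(2, 6, 64, 44)   # toy parsing-mask batch
    print(DualBranchSketch()(sil, par).shape)  # torch.Size([2, 128])
```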