mmGaitSet: multimodal based gait recognition for countering carrying and clothing changes

Published: 01 Jan 2022, Last Modified: 15 May 2023 · Appl. Intell. 2022
Abstract: This paper studies gait features that are robust to changes in pedestrian carrying and clothing conditions. Inspired by the fact that humans attend to pose details based on part movements when performing gait recognition, we introduce pose information into the convolutional network without the complex computation of human body modeling. We construct a multimodal set-based deep convolutional network (mmGaitSet). The mmGaitSet consists of two independent feature extractors, which extract body features from silhouettes and part features from pose heatmaps, respectively. Joint training makes the two feature extractors complement each other. We combine intra-modal fusion and inter-modal fusion in the network. The intra-modal fusion integrates low-level structural features and high-level semantic features to improve the discriminability of single-modality features. The inter-modal fusion fully aggregates the complementary information between different modalities to enhance the pedestrian gait representation. State-of-the-art results are achieved on the challenging CASIA-B dataset, outperforming recent competing methods with average rank-1 accuracies of up to 92.5% and 80.3% under bag-carrying and coat-wearing walking conditions, respectively.
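The two-stream design described in the abstract can be sketched at a high level as follows. This is a minimal illustration, not the authors' implementation: the extractor weights, dimensions, and the use of simple max set-pooling and concatenation for fusion are all assumptions made for clarity; the paper's actual extractors are deep CNNs and its fusion modules are learned.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a deep feature extractor: maps a set of frames
# to a (low-level structural, high-level semantic) feature pair.
def extract(frames, w_low, w_high):
    pooled = frames.max(axis=0)      # permutation-invariant set pooling over frames
    low = pooled @ w_low             # low-level structural features
    high = np.tanh(low @ w_high)     # high-level semantic features
    return low, high

def intra_modal_fuse(low, high):
    # Integrate low-level structural and high-level semantic features
    # within one modality (concatenation chosen for illustration).
    return np.concatenate([low, high])

def inter_modal_fuse(body_feat, part_feat):
    # Aggregate complementary information across the two modalities.
    return np.concatenate([body_feat, part_feat])

# Toy inputs: a set of 30 flattened silhouette frames and pose-heatmap frames.
silhouettes = rng.standard_normal((30, 64))
heatmaps = rng.standard_normal((30, 64))

# Each modality has its own (randomly initialized) extractor weights.
w_low = rng.standard_normal((64, 32))
w_high = rng.standard_normal((32, 32))

body = intra_modal_fuse(*extract(silhouettes, w_low, w_high))
part = intra_modal_fuse(*extract(heatmaps, w_low, w_high))
gait_embedding = inter_modal_fuse(body, part)
print(gait_embedding.shape)  # (128,)
```

In the paper the two extractors are trained jointly so that silhouette-based body features and heatmap-based part features complement each other; here the point is only the data flow: per-modality set pooling, intra-modal fusion of low- and high-level features, then inter-modal aggregation into one gait embedding.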