Abstract: Gait recognition plays a vital role in biometric applications by analyzing the unique characteristics of an individual's walking pattern. Methods based on 2D representations, such as silhouettes and skeletons, are increasingly being developed to learn shape features and joint dynamics. Nevertheless, the effectiveness of 2D representation-based methods is impeded by factors such as changes in viewpoint, partial occlusion, and noisy environments. 3D representation-based methods can complement 2D representation-based approaches by providing more precise dynamic body shapes and motion information, along with increased robustness against changes in viewpoint and partial occlusion. However, the complexity of acquiring accurate 3D representations and the challenges of extracting dynamic topological features from sequences of 3D representations hinder the development of 3D representation-based methods. In this paper, we present VM-Gait, a novel multi-modal gait recognition framework that harnesses the advantages of integrating both 2D and 3D representations. Furthermore, we introduce a new 3D representation, the Virtual Marker, into gait recognition to efficiently learn topological features from 3D representations, avoiding the computational complexities inherent in learning directly from 3D representations such as 3D meshes or 3D point clouds. Extensive experiments demonstrate that the proposed framework effectively learns and fuses discriminative information from different gait modalities, enhancing gait recognition performance.