LGDiffGait: Local and Global Difference Learning for Gait Recognition with Silhouettes

26 Sept 2024 (modified: 05 Feb 2025) · Submitted to ICLR 2025 · CC BY 4.0
Keywords: Gait Recognition; Movement Difference Modeling; Temporal Modeling
TL;DR: LGDiffGait captures local and global movement differences in gait sequences for more accurate gait recognition.
Abstract: The subtle differences between consecutive frames of a gait video sequence are crucial for accurate gait identification, as they reflect the distinctive movement of various body parts during an individual’s walk. However, most existing methods focus only on capturing spatial-temporal features of entire gait sequences, neglecting these nuances. To address this limitation, we propose a new approach, named Local and Global Difference Learning for Gait Recognition with Silhouettes (LGDiffGait). Specifically, the differences within gait sequences are explicitly modeled at two levels: the local window level and the global sequence level. At the local window level, we apply sliding windows along the temporal dimension to aggregate window-level information, and local movement is defined as the difference between pooled features of adjacent frames within each window. At the global sequence level, we apply global pooling across the entire sequence, followed by subtraction, to capture overall movement differences. Moreover, after difference feature learning, we develop a temporal alignment module that aligns the extracted local and global differences with the overall sequence dynamics, ensuring temporal consistency. By explicitly modeling these differences, LGDiffGait can capture the subtle movements of different body parts, enabling the extraction of more discriminative features. Our experimental results demonstrate that LGDiffGait achieves state-of-the-art performance on four publicly available datasets.
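The two difference levels described in the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the function names, the window size, and the use of mean pooling over per-frame feature vectors are all assumptions made for illustration.

```python
import numpy as np

def local_movement(features, window=3):
    """Local window-level differences (illustrative sketch).

    Slide a window along the temporal axis and take the difference
    between features of adjacent frames inside each window.
    features: (T, C) array of per-frame features, T frames, C channels.
    """
    T, _ = features.shape
    diffs = []
    for start in range(T - window + 1):
        win = features[start:start + window]   # (window, C)
        diffs.append(win[1:] - win[:-1])       # adjacent-frame differences
    return np.concatenate(diffs, axis=0)

def global_movement(features):
    """Global sequence-level differences (illustrative sketch).

    Pool over the whole sequence (mean pooling assumed here), then
    subtract the pooled representation from each frame's features.
    """
    global_feat = features.mean(axis=0, keepdims=True)  # (1, C)
    return features - global_feat                       # (T, C)
```

In the actual model these operations would act on intermediate feature maps of a silhouette encoder rather than raw vectors, and a temporal alignment module would then fuse the two difference streams with the sequence features.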
Primary Area: applications to computer vision, audio, language, and other modalities
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 5762