Learning Discriminative and Robust Representations for UAV-View Skeleton-Based Action Recognition

Published: 01 Jan 2024, Last Modified: 09 Nov 2024. ICME Workshops 2024. License: CC BY-SA 4.0.
Abstract: Skeleton-based human action recognition, a crucial topic in human action understanding, has attracted much attention recently. While many efforts have been devoted to skeleton-based action recognition on data collected in laboratory settings, the performance of these models degrades under data corruption caused by various real-world factors, e.g., diverse viewpoints and object occlusion. This work focuses on the challenging Unmanned Aerial Vehicle (UAV) view, which is more aligned with real-world scenarios, and proposes a simple yet effective framework to Learn discriminative and robust Representations for UAV-view skeleton-based action recognition (LRU). Experiments on the challenging large-scale UAV dataset, UAV-Human, demonstrate the effectiveness of our method, which surpasses state-of-the-art methods by 1.62% and 6.11% under the cross-subject-v1 and cross-subject-v2 protocols, respectively.