Retrieval of spatial-temporal motion topics from 3D skeleton data

Published: 01 Jan 2019 | Last Modified: 04 Jun 2024 | Vis. Comput. 2019 | CC BY-SA 4.0
Abstract: Retrieving a specific human motion from 3D skeleton data is difficult because of the articulated complexity of the human body. We propose a context-based motion-document formation method that captures geometric variation by computing covariance descriptors over skeletal joint locations and joint relative distances, and temporal variation by applying a coarse-to-fine segmentation to the motion sequence. The descriptors of a query motion are matched across all motion categories to determine its motion words, which serve as the basic units of a motion document. Motion words derived from different spatiotemporal descriptors are mapped to disjoint index ranges, injecting prior knowledge of motion with temporal order into latent Dirichlet allocation (LDA). Similarity matching is then performed on the semantically meaningful motion-topic distributions produced by LDA. Experiments on public datasets demonstrate the effectiveness and robustness of the proposed method compared with existing models.
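The pipeline described above (covariance descriptors over joints and joint distances, quantized motion words, LDA topic mixtures, topic-level matching) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the segment data, vocabulary size, word counts, and the cosine similarity metric are all assumptions for demonstration, and the quantization step that produces motion words is replaced by toy count histograms.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)

# --- Spatial descriptor: covariance over one motion segment (a sketch) ---
# Hypothetical input: a segment of T frames, each with J 3-D joint positions.
T, J = 30, 20
joints = rng.standard_normal((T, J, 3))            # (frames, joints, xyz)

# Per-frame feature vector: joint coordinates plus pairwise joint distances.
coords = joints.reshape(T, -1)                     # (T, 3J)
diffs = joints[:, :, None, :] - joints[:, None, :, :]
iu = np.triu_indices(J, k=1)
dists = np.linalg.norm(diffs, axis=-1)[:, iu[0], iu[1]]  # (T, J(J-1)/2)
feats = np.hstack([coords, dists])

# Covariance descriptor of the segment: captures how the features co-vary
# over time and is independent of the segment's length.
cov = np.cov(feats, rowvar=False)

# --- Topic model: LDA over motion-word counts (toy data) ---
# Hypothetical: each motion document is a histogram of quantized motion
# words; in the paper, words from different spatiotemporal descriptors
# occupy disjoint index ranges of the vocabulary.
docs = rng.integers(0, 5, size=(8, 50))            # 8 documents, 50-word vocab
lda = LatentDirichletAllocation(n_components=3, random_state=0)
theta = lda.fit_transform(docs)                    # per-document topic mixtures

# Match a query against a database motion by comparing topic distributions
# (cosine similarity here; the abstract does not specify the exact metric).
q, d = theta[0], theta[1]
sim = q @ d / (np.linalg.norm(q) * np.linalg.norm(d))
```

In a full system, each segment's covariance descriptor would be quantized against a learned codebook to yield the motion words that populate `docs`; the coarse-to-fine segmentation would contribute words at several temporal scales.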