Two-stream Graph Attention Convolutional Network for Video Action Recognition

Published: 01 Jan 2021, Last Modified: 13 Nov 2024 · BigDataSE 2021 · CC BY-SA 4.0
Abstract: Graph convolutional networks can efficiently extract spatial features for skeleton-based human action recognition, and the HSGAC network, which incorporates a graph attention convolution mechanism, obtains more abstract spatial features. However, HSGAC does not consider the dynamic spatial importance of joints. In this paper, we propose a two-stream graph attention convolutional network (Two-Stream GAC) that fuses the static and dynamic feature relationships between joints. Two-Stream GAC first preprocesses the original human pose information dynamically, then uses two HSGAC models to obtain the static feature relationships of the original pose information and the dynamic feature relationships of the preprocessed pose information, respectively. The outputs of the two networks are fused by a weighted sum. Experimental results on the RDT and NTU-RGB+D datasets show that the method achieves higher action recognition accuracy.
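The abstract describes fusing the two streams' outputs by a weighted sum. A minimal sketch of that fusion step, assuming each HSGAC stream produces a per-class score vector (the function names, the weight parameter `alpha`, and the treatment of each stream as a black box are all illustrative, not taken from the paper):

```python
# Hypothetical sketch of two-stream weighted-sum fusion. The two HSGAC
# models are assumed to have already produced per-class score lists.

def fuse_two_streams(static_scores, dynamic_scores, alpha=0.5):
    """Element-wise weighted sum of per-class scores from two streams.

    static_scores:  scores from the stream fed the original pose data.
    dynamic_scores: scores from the stream fed the preprocessed pose data.
    alpha: weight of the static stream; (1 - alpha) weights the dynamic one.
    """
    return [alpha * s + (1.0 - alpha) * d
            for s, d in zip(static_scores, dynamic_scores)]


def predict(static_scores, dynamic_scores, alpha=0.5):
    """Return the index of the highest fused class score."""
    fused = fuse_two_streams(static_scores, dynamic_scores, alpha)
    return max(range(len(fused)), key=fused.__getitem__)
```

The single scalar `alpha` mirrors the abstract's "fused by weighted sum"; how the paper actually sets the weight (fixed, validated, or learned) is not stated here.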