Unsupervised video-based action recognition using two-stream generative adversarial network

Published: 01 Jan 2024, Last Modified: 13 Nov 2024 · Neural Comput. Appl. 2024 · CC BY-SA 4.0
Abstract: Video-based action recognition faces many challenges, such as complex and varied dynamic motion, spatio-temporally similar action factors, and the manual labeling of archived videos in large datasets. Extracting discriminative spatio-temporal action features from videos while resisting the effect of similar factors, in an unsupervised manner, is therefore pivotal. To this end, this paper proposes an unsupervised video-based action recognition method, called the two-stream generative adversarial network (TS-GAN), which comprehensively learns the static texture and dynamic motion information inherent in videos while taking both detailed and global information into account. Specifically, spatio-temporal information is extracted from videos by a two-stream GAN. Considering that proper attention to detail can alleviate the influence of spatio-temporally similar factors on the network, a global-detailed layer is proposed to resist such factors by fusing intermediate features (i.e., detailed action information) with high-level semantic features (i.e., global action information). It is worth mentioning that, unlike recent unsupervised video-based action recognition methods, the proposed TS-GAN requires neither complex pretext tasks nor the construction of positive and negative sample pairs. Extensive experiments on the UCF101 and HMDB51 datasets demonstrate that the proposed TS-GAN is superior to multiple classical and state-of-the-art unsupervised action recognition methods.
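To make the fusion idea concrete, the sketch below shows one plausible reading of a "global-detailed" layer: an intermediate feature map (detailed information) is pooled and concatenated with a high-level semantic vector (global information) before a learned projection. This is only a minimal PyTorch illustration under assumed layer sizes and a pool-concat-linear fusion rule; the class name `GlobalDetailedFusion` and all dimensions are hypothetical, not the authors' implementation.

```python
import torch
import torch.nn as nn


class GlobalDetailedFusion(nn.Module):
    """Hypothetical sketch of fusing detailed (intermediate) and global
    (high-level) features; sizes and fusion rule are assumptions."""

    def __init__(self, detail_channels: int, global_dim: int, out_dim: int):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # collapse spatial dims of the detail map
        self.fuse = nn.Linear(detail_channels + global_dim, out_dim)

    def forward(self, detail_map: torch.Tensor, global_feat: torch.Tensor) -> torch.Tensor:
        # detail_map: (B, C, H, W) intermediate features; global_feat: (B, D) semantic features
        detail_vec = self.pool(detail_map).flatten(1)  # (B, C)
        return torch.relu(self.fuse(torch.cat([detail_vec, global_feat], dim=1)))


if __name__ == "__main__":
    # Usage sketch for one stream (e.g., the spatial/RGB stream); the temporal
    # (optical-flow) stream would apply the same fusion to its own features.
    fusion = GlobalDetailedFusion(detail_channels=256, global_dim=512, out_dim=128)
    rgb_detail = torch.randn(4, 256, 14, 14)  # assumed intermediate feature map
    rgb_global = torch.randn(4, 512)          # assumed high-level feature vector
    fused = fusion(rgb_detail, rgb_global)    # (4, 128) per-stream action descriptor
    print(fused.shape)
```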