CNNs, RNNs and Transformers in human action recognition: a survey and a hybrid model

Published: 2025 · Last Modified: 15 Jan 2026 · Artif. Intell. Rev. 2025 · License: CC BY-SA 4.0
Abstract: Human action recognition (HAR) encompasses the task of monitoring human activities across a range of domains, including medical, educational, entertainment, visual surveillance, video retrieval, and the identification of anomalous activities. Over the past decade, the field has made substantial progress by leveraging convolutional neural networks (CNNs) and recurrent neural networks (RNNs) to extract and interpret intricate spatio-temporal information, thereby enhancing the overall performance of HAR systems. More recently, Vision Transformers (ViTs) have emerged as a potent solution in computer vision, and the efficacy of the Transformer architecture has been validated beyond image analysis, extending its applicability to diverse video-related tasks. Within this landscape, the research community has shown keen interest in HAR, given its broad utility and widespread adoption across domains. HAR nevertheless remains a challenging task owing to variations in human motion, occlusions, viewpoint differences, background clutter, and the need for efficient spatio-temporal feature extraction. In addition, the trade-off between computational efficiency and recognition accuracy remains a significant obstacle, particularly as deep learning models require extensive training data and resources. This article presents an encompassing survey of CNNs and of the evolution from RNNs to ViTs, given their importance in HAR. Through a thorough examination of the existing literature and emerging trends, it offers a critical analysis and synthesis of the accumulated knowledge in this field, and it reviews ongoing efforts to develop hybrid approaches. Following this direction, the article presents a novel hybrid model that seeks to integrate the inherent strengths of CNNs and ViTs.
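To make the hybrid direction concrete, the sketch below shows one generic way a CNN and a Transformer encoder can be combined for HAR: a small per-frame CNN produces frame embeddings, and a Transformer encoder aggregates them over time before classification. This is only an illustrative assumption, not the architecture proposed in the article; the class name HybridCNNTransformer, the toy CNN stem, and all hyperparameters are hypothetical.

```python
# Hypothetical sketch of a hybrid CNN + Transformer model for HAR.
# Assumptions: input is a clip of T RGB frames; a small CNN encodes each
# frame; a Transformer encoder models temporal relations between frames.
# This is NOT the architecture from the surveyed article.
import torch
import torch.nn as nn


class HybridCNNTransformer(nn.Module):
    def __init__(self, num_classes: int, embed_dim: int = 256,
                 num_heads: int = 4, num_layers: int = 2, max_frames: int = 32):
        super().__init__()
        # Lightweight per-frame CNN backbone (stand-in for e.g. a ResNet).
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),           # -> (B*T, 64, 1, 1)
        )
        self.proj = nn.Linear(64, embed_dim)   # frame feature -> token
        # Learnable temporal position embeddings and a [CLS] token.
        self.pos = nn.Parameter(torch.zeros(1, max_frames + 1, embed_dim))
        self.cls = nn.Parameter(torch.zeros(1, 1, embed_dim))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=num_heads, batch_first=True)
        self.temporal = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (B, T, 3, H, W)
        b, t, c, h, w = clip.shape
        feats = self.cnn(clip.view(b * t, c, h, w)).flatten(1)    # (B*T, 64)
        tokens = self.proj(feats).view(b, t, -1)                  # (B, T, D)
        cls = self.cls.expand(b, -1, -1)
        x = torch.cat([cls, tokens], dim=1) + self.pos[:, : t + 1]
        x = self.temporal(x)                                      # (B, T+1, D)
        return self.head(x[:, 0])                                 # logits from [CLS]


if __name__ == "__main__":
    model = HybridCNNTransformer(num_classes=10)
    clip = torch.randn(2, 16, 3, 112, 112)     # 2 clips of 16 frames each
    print(model(clip).shape)                   # torch.Size([2, 10])
```

In practice, the toy convolutional stem would typically be replaced by a pretrained 2D backbone, and the single [CLS]-token readout is just one of several common temporal pooling choices.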