Mamba-MHAR: An efficient multimodal framework for human action recognition

Trung-Hieu Le, Thai Khanh Nguyen, Tuan-Anh Le, Mathieu Delalandre, Kien Tran Trung, Thanh-Hai Tran, Cuong Pham

Published: 27 Sept 2025, Last Modified: 06 Nov 2025 · Journal of Computer Science and Cybernetics · CC BY-SA 4.0
Abstract: Human Action Recognition (HAR) has emerged as an active research domain in recent years, with wide-ranging applications in healthcare monitoring, smart home systems, and human–robot interaction. This paper introduces Mamba-MHAR (Mamba-based Multimodal Human Action Recognition), a lightweight multimodal architecture aimed at improving HAR performance by effectively integrating data from inertial sensors and egocentric videos. Mamba-MHAR consists of two Mamba-based branches, one for visual feature extraction (VideoMamba) and the other for motion feature extraction (MAMC). Both branches are built upon recently introduced Selective State Space Models (SSMs) to reduce computational cost, and their outputs are subsequently fused for final human activity classification. Mamba-MHAR achieves significant efficiency gains in terms of GPU usage, making it highly suitable for real-time deployment on edge and mobile devices. Extensive experiments were conducted on two challenging multimodal datasets, UESTC-MMEA-CL and MuWiGes, which contain synchronized IMU and video data recorded in natural settings. The proposed Mamba-MHAR achieves 98.00% accuracy on UESTC-MMEA-CL and 98.58% on MuWiGes, surpassing state-of-the-art baselines. These results demonstrate that a simple yet efficient fusion of lightweight multimodal Mamba-based models provides a promising solution for scalable, low-power applications in pervasive computing environments.
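The two-branch late-fusion design described in the abstract can be sketched schematically. This is a minimal illustrative sketch only: the real branches are VideoMamba and MAMC (Mamba-based SSM encoders), whereas here they are stand-in pooling functions over hypothetical input shapes; all dimensions and names are assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def video_branch(frames):
    # Stand-in for VideoMamba: frames of shape (T, D_v) egocentric video
    # features, pooled into a single visual embedding (hypothetical).
    return frames.mean(axis=0)

def motion_branch(signals):
    # Stand-in for MAMC: IMU signals of shape (T, D_m), pooled into a
    # single motion embedding (hypothetical).
    return signals.mean(axis=0)

def fuse_and_classify(frames, signals, W, b):
    # Late fusion: concatenate the two branch embeddings, then apply a
    # linear classification head over the action classes.
    z = np.concatenate([video_branch(frames), motion_branch(signals)])
    logits = W @ z + b
    return int(np.argmax(logits))

# Hypothetical sizes: 16 timesteps, 32-dim video feats, 6-dim IMU, 10 classes.
frames = rng.standard_normal((16, 32))
signals = rng.standard_normal((16, 6))
W = rng.standard_normal((10, 38))   # 38 = 32 + 6 fused dimensions
b = np.zeros(10)

pred = fuse_and_classify(frames, signals, W, b)
```

The design choice sketched here is late fusion: each modality is encoded independently (which is what lets each branch stay lightweight), and only the compact embeddings are combined before classification.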