Multimodal Learning for Human Action Recognition Via Bimodal/Multimodal Hybrid Centroid Canonical Correlation Analysis

Published: 01 Jan 2019 · Last Modified: 27 Sept 2024 · IEEE Transactions on Multimedia, 2019 · License: CC BY-SA 4.0
Abstract: In this paper, we study the problem of human action recognition from multiple feature modalities. We propose bimodal hybrid centroid canonical correlation analysis (BHCCCA) and multimodal hybrid centroid canonical correlation analysis (MHCCCA) to learn a discriminative and informative shared space by modeling the correlation among different classes across two modalities (BHCCCA) or three or more modalities (MHCCCA). We then introduce a new human action recognition framework that uses BHCCCA/MHCCCA to fuse different modalities (RGB, depth, skeleton, and accelerometer data). Performance evaluation on four publicly accessible datasets (MSR Action3D, UTD-MHAD, UTD-MHAD-Kinect V2, and Berkeley MHAD) demonstrates the effectiveness of the proposed framework.
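The abstract does not spell out the BHCCCA objective, but the method builds on canonical correlation analysis for two-view fusion. As context, a minimal sketch of classical two-view CCA (whitening followed by an SVD of the cross-covariance) with feature-level fusion of the projected modalities is shown below; the function name `cca`, the regularizer `reg`, and the synthetic data are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def cca(X, Y, reg=1e-6):
    """Classical two-view CCA (illustrative sketch, not the paper's BHCCCA).

    X: (n, p) samples from modality 1; Y: (n, q) samples from modality 2.
    Returns projection matrices Wx, Wy and the canonical correlations.
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Sxx = X.T @ X / n + reg * np.eye(X.shape[1])  # regularized covariances
    Syy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Sxy = X.T @ Y / n                              # cross-covariance

    def inv_sqrt(S):
        # Symmetric inverse square root via eigendecomposition
        w, V = np.linalg.eigh(S)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    Kx, Ky = inv_sqrt(Sxx), inv_sqrt(Syy)
    U, s, Vt = np.linalg.svd(Kx @ Sxy @ Ky)
    Wx = Kx @ U        # projection directions for modality 1
    Wy = Ky @ Vt.T     # projection directions for modality 2
    return Wx, Wy, s   # s holds the canonical correlations (descending)

# Toy example: two modalities driven by a shared 3-dim latent signal
rng = np.random.default_rng(0)
Z = rng.normal(size=(200, 3))
X = Z @ rng.normal(size=(3, 10)) + 0.1 * rng.normal(size=(200, 10))
Y = Z @ rng.normal(size=(3, 8)) + 0.1 * rng.normal(size=(200, 8))
Wx, Wy, corrs = cca(X, Y)

# Feature-level fusion: project each modality into the shared space
# and concatenate, as is common before feeding a classifier.
fused = np.hstack([(X - X.mean(0)) @ Wx[:, :3],
                   (Y - Y.mean(0)) @ Wy[:, :3]])
```

The paper's BHCCCA/MHCCCA variants additionally exploit class centroids and correlations among classes, and MHCCCA extends the objective beyond two views, which plain CCA above does not do.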