Multimodal Learning for Human Action Recognition Via Bimodal/Multimodal Hybrid Centroid Canonical Correlation Analysis
Abstract: In this paper, we study the problem of human action recognition from multiple feature modalities. We propose bimodal hybrid centroid canonical correlation analysis (BHCCCA) and multimodal hybrid centroid canonical correlation analysis (MHCCCA) to learn a discriminative and informative shared space by considering the correlation among different classes across two modalities (BHCCCA) or three or more modalities (MHCCCA). We then introduce a new human action recognition framework that uses BHCCCA/MHCCCA to fuse different modalities (RGB, depth, skeleton, and accelerometer data). Performance evaluation on four publicly available datasets (MSR Action3D, UTD-MHAD, UTD-MHAD-Kinect V2, and Berkeley MHAD) demonstrates the effectiveness of the proposed framework.
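The abstract does not give the BHCCCA/MHCCCA formulation, but the underlying idea of projecting two feature modalities into a correlated shared space can be illustrated with classical two-view canonical correlation analysis. The sketch below is a minimal NumPy implementation of standard CCA (not the authors' hybrid centroid variant): it whitens each view's covariance and takes an SVD of the cross-covariance, then fuses the two modalities by concatenating their projections. All function and variable names here are illustrative assumptions, not from the paper.

```python
import numpy as np

def cca(X, Y, d, reg=1e-6):
    """Classical two-view CCA (illustrative baseline, not BHCCCA).

    X: (n, p) features from modality 1 (e.g., depth descriptors)
    Y: (n, q) features from modality 2 (e.g., skeleton descriptors)
    d: number of canonical components to keep
    Returns projection matrices Wx (p, d), Wy (q, d) and the
    top-d canonical correlations.
    """
    # Center each view
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]

    # Regularized covariance and cross-covariance matrices
    Sxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Syy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Sxy = X.T @ Y / n

    def inv_sqrt(S):
        # Inverse matrix square root via eigendecomposition
        w, V = np.linalg.eigh(S)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    Kx, Ky = inv_sqrt(Sxx), inv_sqrt(Syy)
    # SVD of the whitened cross-covariance gives canonical directions
    U, s, Vt = np.linalg.svd(Kx @ Sxy @ Ky)
    Wx = Kx @ U[:, :d]
    Wy = Ky @ Vt[:d].T
    return Wx, Wy, s[:d]

# Toy example: two modalities sharing a common latent signal
rng = np.random.default_rng(0)
z = rng.normal(size=(200, 1))                       # shared latent factor
X = np.hstack([z + 0.1 * rng.normal(size=(200, 1)),
               rng.normal(size=(200, 2))])          # modality 1
Y = np.hstack([z + 0.1 * rng.normal(size=(200, 1)),
               rng.normal(size=(200, 3))])          # modality 2

Wx, Wy, corrs = cca(X, Y, d=2)
# Fusion by concatenating the projected views, as input to a classifier
fused = np.hstack([X @ Wx, Y @ Wy])
```

In a recognition pipeline like the one described, the fused representation would be fed to a downstream classifier; the paper's BHCCCA additionally exploits class-centroid information, which plain CCA above ignores.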