CA-SER: Cross-Attention Feature Fusion for Speech Emotion Recognition

Published: 01 Jan 2024, Last Modified: 19 Feb 2025 · ECAI 2024 · CC BY-SA 4.0
Abstract: In this paper, we introduce CA-SER, a novel approach to speech emotion recognition that leverages self-supervised learning: semantic speech representations extracted from a pre-trained wav2vec 2.0 model are combined with spectral audio features to improve recognition performance. Our approach applies a self-attention encoder to MFCC features to capture meaningful patterns in audio sequences; these encoded MFCC features are then fused with the high-level wav2vec 2.0 representations via a multi-head cross-attention mechanism. Evaluated on the IEMOCAP dataset, our system achieves a weighted accuracy of 74.6%, outperforming most existing techniques.
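The fusion step described above can be sketched as multi-head cross-attention in which the MFCC encoder outputs act as queries and the wav2vec 2.0 representations act as keys and values. The following is a minimal NumPy illustration of that mechanism only, not the authors' implementation: the feature dimensions, sequence lengths, head count, and random projection matrices are all hypothetical stand-ins for learned parameters.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query, key_value, num_heads=4, seed=0):
    """Multi-head cross-attention: `query` attends over `key_value`.

    query:     (T_q, d)  e.g. self-attention-encoded MFCC features
    key_value: (T_kv, d) e.g. wav2vec 2.0 representations
    The projection matrices below are random stand-ins for learned weights.
    """
    T_q, d = query.shape
    assert d % num_heads == 0
    d_h = d // num_heads
    rng = np.random.default_rng(seed)
    Wq, Wk, Wv, Wo = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(4))

    # Project and split into heads: (num_heads, T, d_h).
    q = (query @ Wq).reshape(T_q, num_heads, d_h).transpose(1, 0, 2)
    k = (key_value @ Wk).reshape(-1, num_heads, d_h).transpose(1, 0, 2)
    v = (key_value @ Wv).reshape(-1, num_heads, d_h).transpose(1, 0, 2)

    # Scaled dot-product attention per head: (num_heads, T_q, T_kv).
    scores = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(d_h), axis=-1)

    # Aggregate values, merge heads, and apply the output projection.
    out = (scores @ v).transpose(1, 0, 2).reshape(T_q, d)
    return out @ Wo

# Hypothetical feature shapes: 50 MFCC frames and 100 wav2vec frames, dim 64.
mfcc_feats = np.random.default_rng(1).standard_normal((50, 64))
w2v_feats = np.random.default_rng(2).standard_normal((100, 64))
fused = cross_attention(mfcc_feats, w2v_feats)
print(fused.shape)  # fused sequence aligned to the MFCC time axis: (50, 64)
```

Note that the fused output keeps the query's time axis, so the cross-attention lets each MFCC frame gather context from the full wav2vec 2.0 sequence regardless of the two streams' frame rates.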