Multimodal Emotion Recognition System Leveraging Decision Fusion with Acoustic and Visual Cues

Published: 2024 · Last Modified: 08 Jan 2026 · ICPR 2024 (Workshops and Challenges, 4) · CC BY-SA 4.0
Abstract: Multimodal emotion recognition (MER) involves detecting and understanding human emotions by analyzing multiple modalities, such as images, audio, video, and text. MER is challenging because of the complexity of the individual modalities and the difficulty of fusing their information to interpret and classify human emotions accurately. This paper introduces an intelligent framework (MEmoR) for multimodal emotion recognition leveraging audio-visual fusion, focusing on the challenging domain of emotion detection in a Bengali audio-visual dataset. A key contribution of this work is a new multimodal emotion recognition dataset (MERD), tailored to the task's requirements. MERD comprises 1937 annotated multimodal samples across four categories: happy, sad, angry, and neutral. The proposed framework employs various machine learning (ML), deep learning (DL), and transformer-based models for the audio and visual modalities, and integrates the two modalities through both feature-level and decision-level fusion.
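The abstract distinguishes feature-level fusion (combining modality features before classification) from decision-level fusion (combining each modality's predictions). As a minimal illustration, the sketch below shows decision-level fusion as a weighted average of per-class probabilities from separate audio and visual classifiers; the class order, weights, and function names are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

# Assumed class order matching MERD's four emotion labels (illustrative only).
CLASSES = ["happy", "sad", "angry", "neutral"]

def decision_fusion(audio_probs, visual_probs, audio_weight=0.5):
    """Late (decision-level) fusion: each modality's classifier outputs a
    probability distribution over the emotion classes; the fused prediction
    is the argmax of their weighted average.

    (Feature-level fusion would instead concatenate the modalities' feature
    vectors and train a single classifier on the joint representation.)
    """
    audio_probs = np.asarray(audio_probs, dtype=float)
    visual_probs = np.asarray(visual_probs, dtype=float)
    fused = audio_weight * audio_probs + (1.0 - audio_weight) * visual_probs
    return CLASSES[int(np.argmax(fused))], fused

# Example: the audio model leans toward "sad", the visual model toward
# "neutral"; equal weighting lets the stronger audio evidence decide.
label, fused = decision_fusion([0.10, 0.50, 0.10, 0.30],
                               [0.05, 0.35, 0.10, 0.50])
```

With equal weights the fused distribution is [0.075, 0.425, 0.100, 0.400], so the combined prediction is "sad" even though the visual model alone would have said "neutral".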