Exploring Emotion Recognition with a Multi-Scale fNIRS Dataset: A Novel Approach Integrating Statistical Information and Cross-Channel Attention

Published: 01 Jan 2024, Last Modified: 13 Nov 2024 · CMLDS 2024 · CC BY-SA 4.0
Abstract: In this paper, we develop a novel functional near-infrared spectroscopy (fNIRS) multi-scale emotion-labeled dataset comprising synchronized fNIRS recordings from 20 subjects watching 20 emotional videos, with corresponding labels for arousal, valence, emotion category, and emotion intensity. These labels closely mirror real-world human emotional experience, enhancing the dataset's complexity and applicability. We validate the dataset through data analysis, machine learning, and deep learning techniques. Our study provides baseline results for cross-subject and subject-specific emotion classification tasks, and additionally explores the classification of extreme emotions. Furthermore, we design a novel deep learning model that employs a cross-channel attention mechanism to capture interactions between brain regions and effectively integrates statistical information from the time series. Across numerous classification tasks, this model outperforms existing models. We further test it on other publicly available datasets, where it surpasses existing baseline results, demonstrating its generalizability.
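The abstract does not include code, so the following is only a minimal sketch of the general idea it describes: per-channel statistical features extracted from fNIRS time series, followed by an attention step in which each channel attends to every other channel. All function names, weight matrices, feature choices, and dimensions here are hypothetical illustrations, not the authors' actual architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def stat_features(ts):
    # hypothetical statistical summary per channel: mean, std, min, max
    return np.stack([ts.mean(-1), ts.std(-1), ts.min(-1), ts.max(-1)], axis=-1)

def cross_channel_attention(x, Wq, Wk, Wv):
    # x: (channels, d) — one feature vector per fNIRS channel.
    # Attention is computed across channels, so each channel's output
    # is a weighted mix of all channels' value vectors.
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])   # (channels, channels)
    attn = softmax(scores, axis=-1)           # rows sum to 1
    return attn @ v, attn

rng = np.random.default_rng(0)
ts = rng.standard_normal((20, 128))           # 20 channels x 128 time points (illustrative sizes)
feats = stat_features(ts)                     # (20, 4)
d = feats.shape[-1]
Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
out, attn = cross_channel_attention(feats, Wq, Wk, Wv)
```

The resulting attention matrix can be read as a (learned) channel-to-channel interaction map, which is one plausible way the "interactions between brain regions" mentioned in the abstract could be realized.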
