Abstract: In consumer electronics, affective computing allows devices to recognize and respond to users' emotional states, thereby enhancing personalization and satisfaction. Electroencephalogram (EEG)-based emotion recognition methods have garnered attention for their capability to provide real-time, objective insights into emotional and cognitive states. Existing research mainly focuses on extracting temporal and spectral features from EEG signals for emotion recognition. However, electrode shifts may occur across different experiments during data acquisition, resulting in models with limited robustness and constrained classification performance. To address this challenge, this paper introduces a Brain Region Knowledge based Dual-Stream network (BRKDSnet) for emotion recognition. It employs a dual-stream architecture to effectively fuse the temporal, spectral, and spatial features of EEG signals. By incorporating prior knowledge of brain region partitions, it efficiently aggregates channel features into distinct brain region features. This brain-region-based partition mitigates the noise caused by electrode shifts as users wear devices in consumer electronics applications, enabling the model to learn more robust feature representations. Experiments conducted on the widely used emotion recognition datasets SEED and SEED-IV show that BRKDSnet achieves state-of-the-art results, with accuracies of 97.81% (an improvement of 0.64%-3.57% over existing works) and 90.76% (an improvement of 3.13%-11.39%), respectively. Finally, analysis of the visualization results shows that EEG channels located in the temporal lobe contribute the most to emotion recognition, which also suggests potential for future applications in wearable devices.
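As a rough illustration of the brain-region aggregation idea described above, the sketch below groups per-channel EEG features by an assumed lobe partition and mean-pools them into region-level features before classification. The region names, the channel-to-region index map, the pooling choice, and all layer sizes are illustrative assumptions, not the paper's actual BRKDSnet design.

```python
# Minimal sketch (not the authors' implementation) of brain-region-based
# channel aggregation for EEG emotion recognition.
import torch
import torch.nn as nn

# Hypothetical partition of a 62-channel montage (e.g., SEED) into lobes;
# the index ranges are placeholders, not the paper's actual grouping.
REGIONS = {
    "frontal":   list(range(0, 14)),
    "temporal":  list(range(14, 28)),
    "central":   list(range(28, 42)),
    "parietal":  list(range(42, 52)),
    "occipital": list(range(52, 62)),
}

class RegionAggregator(nn.Module):
    """Pools per-channel features into per-region features, then classifies."""
    def __init__(self, feat_dim: int = 32, n_classes: int = 3):
        super().__init__()
        self.classifier = nn.Linear(len(REGIONS) * feat_dim, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_channels, feat_dim) per-channel feature vectors
        # Mean-pool the channels belonging to each brain region, so a small
        # shift of any single electrode perturbs only its region's average.
        region_feats = [x[:, idx, :].mean(dim=1) for idx in REGIONS.values()]
        z = torch.cat(region_feats, dim=-1)  # (batch, n_regions * feat_dim)
        return self.classifier(z)

model = RegionAggregator()
logits = model(torch.randn(8, 62, 32))  # 8 trials, 62 channels, 32-dim features
print(logits.shape)  # torch.Size([8, 3]) -> SEED has 3 emotion classes
```

Pooling within regions is one plausible way to realize the robustness claim: an electrode shift mostly reshuffles signal among neighboring channels, and region-level averages are less sensitive to that reshuffling than individual channel features.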
External IDs: dblp:journals/tce/LinXLWWL25