Keywords: Electroencephalogram (EEG); self-supervised learning; emotion recognition; multi-model online collaboration
Abstract: Electroencephalography (EEG)-based emotion recognition is critical for developing adaptive brain-computer interfaces, yet it remains challenged by high inter-subject variability and the resulting distribution drift. While self-supervised learning offers a promising alternative to supervised approaches by leveraging unlabeled data, current methods typically rely on offline transfer learning with a calibration phase, which is insufficient for streaming EEG samples in online scenarios. To overcome this limitation, we introduce MMOC, a novel self-supervised framework with Multi-Model Online Collaboration. To handle the varying input samples in the stream, MMOC activates the most suitable model from a candidate pool via a routing mechanism. This routing decision is guided by a hybrid reconstruction–contrastive performance score, which captures distribution drift at both the structural and semantic levels. Furthermore, each model is equipped with an online parameter update mechanism that combines model specialization with mutual assistance. This mechanism not only enhances inter-model differentiation and specialization but also enables collaborative adaptation among models via pseudo-label sharing, thereby improving robustness to evolving data distributions. Extensive experiments on the SEED and DREAMER datasets demonstrate that MMOC outperforms state-of-the-art methods, achieving 86.39\% $\pm$ 5.41 on SEED, and 69.37\% $\pm$ 6.13 (arousal) and 70.33\% $\pm$ 6.78 (valence) on DREAMER. These results confirm its strong resistance to inter-subject variability. Our work offers a practical solution for real-world EEG emotion recognition.
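The routing idea described in the abstract can be illustrated with a minimal sketch: score every candidate model on an incoming sample with a weighted sum of a reconstruction error (structural fit) and a contrastive distance between the sample and an augmented view (semantic fit), then activate the lowest-scoring model. The toy linear encoder/decoder models, the function names, and the weighting parameter `alpha` below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Hypothetical model API: each "model" exposes encode/decode callables.
# Toy linear models stand in for the paper's candidate pool.
def make_toy_model(seed, dim=8, latent=4):
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((latent, dim)) / np.sqrt(dim)
    return {"encode": lambda x, W=W: W @ x, "decode": lambda z, W=W: W.T @ z}

def reconstruction_error(model, x):
    # Structural signal: how well the model rebuilds the raw sample.
    x_hat = model["decode"](model["encode"](x))
    return float(np.mean((x_hat - x) ** 2))

def contrastive_distance(model, x, x_aug):
    # Semantic signal: cosine distance between a sample and its augmented view.
    z1, z2 = model["encode"](x), model["encode"](x_aug)
    cos = np.dot(z1, z2) / (np.linalg.norm(z1) * np.linalg.norm(z2) + 1e-8)
    return 1.0 - float(cos)

def route(models, x, x_aug, alpha=0.5):
    # Activate the candidate with the lowest hybrid score.
    scores = [alpha * reconstruction_error(m, x)
              + (1.0 - alpha) * contrastive_distance(m, x, x_aug)
              for m in models]
    return int(np.argmin(scores))

pool = [make_toy_model(s) for s in (0, 1, 2)]
x = np.random.default_rng(42).standard_normal(8)
x_aug = x + 0.05 * np.random.default_rng(43).standard_normal(8)
best = route(pool, x, x_aug)
```

In an online setting, the selected model would then process the sample and update its parameters, possibly sharing pseudo-labels with the other candidates as the abstract describes.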
Primary Area: applications to neuroscience & cognitive science
Submission Number: 9124