Keywords: sEEG, neural decoding, self-supervision, transformer
Abstract: Intracranial neural recordings such as stereo-electroencephalography (sEEG) offer a unique window for measuring neural signals across multiple brain regions simultaneously. Recent work has focused on developing neural foundation models that learn generalizable representations across both subjects and tasks from such recordings. These models have achieved exciting advances, yet they overlook the modular functional organization of the brain, where neurons from multiple adjacent anatomical regions collectively support specific cognitive functions (e.g., Wernicke's area for speech perception). A key open challenge is how to effectively incorporate this functional context into representation learning to improve both interpretability and decoding performance. To tackle this challenge, we propose a novel pre-training framework, BrainFCIR, that explicitly integrates functional context into model design via spatial-context-guided representation learning. We evaluate BrainFCIR on a publicly available sEEG speech-perception dataset. Extensive experiments show that BrainFCIR, as a unified representation learning framework for intracranial sEEG signals, significantly outperforms prior decoding methods. Overall, our work underscores the importance of functional context in developing more biologically plausible and high-performing neural decoding models.
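The abstract does not spell out how functional context enters the model. As a reading aid only, below is a minimal sketch of one plausible instantiation consistent with the keywords (self-supervision, transformer): each sEEG contact becomes a token, a learned embedding of its anatomical/functional region is added as spatial context, and the encoder is pre-trained by reconstructing masked contacts. All class and variable names, dimensions, region counts, and the masking objective here are assumptions for illustration, not the paper's actual method.

```python
# Hypothetical minimal sketch (not the authors' code): injecting functional
# context into a transformer over sEEG contacts, with masked-reconstruction
# self-supervision. Shapes, region labels, and masking ratio are assumptions.
import torch
import torch.nn as nn


class FunctionalContextEncoder(nn.Module):
    def __init__(self, n_regions: int, window_len: int = 256, d_model: int = 128):
        super().__init__()
        self.signal_proj = nn.Linear(window_len, d_model)   # one token per contact
        self.region_emb = nn.Embedding(n_regions, d_model)  # functional context
        self.mask_token = nn.Parameter(torch.zeros(1, 1, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.recon_head = nn.Linear(d_model, window_len)

    def forward(self, x, region_ids, mask):
        # x: (batch, contacts, window_len); region_ids: (contacts,)
        # mask: (batch, contacts) bool, True where the signal is hidden.
        tok = self.signal_proj(x)
        tok = torch.where(mask.unsqueeze(-1), self.mask_token.expand_as(tok), tok)
        tok = tok + self.region_emb(region_ids)  # region context survives masking
        return self.recon_head(self.encoder(tok))


# Pre-training step: reconstruct masked contacts from visible neighbors, so
# the representation must exploit shared functional context across regions.
model = FunctionalContextEncoder(n_regions=40)
x = torch.randn(8, 64, 256)                  # 8 windows, 64 contacts per subject
region_ids = torch.randint(0, 40, (64,))     # anatomical region per contact
mask = torch.rand(8, 64) < 0.3               # hide ~30% of contacts
recon = model(x, region_ids, mask)
loss = ((recon - x)[mask] ** 2).mean()       # MSE only on masked contacts
loss.backward()
```

Keeping the region embedding outside the masking step is the point of the sketch: a masked contact retains its functional context, so the model is pushed to learn region-aware, rather than purely signal-level, representations.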
Primary Area: applications to neuroscience & cognitive science
Submission Number: 12943