Large EEG Foundation Model Learns Informative Low-Frequency Representations from Intracranial Brain Signals

Published: 01 Mar 2026, Last Modified: 25 Mar 2026 · ICLR 2026 TSALM Workshop Poster · CC BY 4.0
Presentation Attendance: Yes, we will present in-person
Keywords: Brain signals, Cross-modality, EEG foundation model, ECoG, Time-series
TL;DR: Adapted EEG foundation model outperforms conventional ECoG decoders using low-frequency information.
Abstract: Foundation models (FMs) for Electroencephalography (EEG) time-series have recently emerged as powerful tools, leveraging large-scale non-invasive datasets to learn robust, universal spatiotemporal neural representations. A key question is whether these models can bridge the gap to invasive recording modalities such as Electrocorticography (ECoG), which is prized for its superior signal-to-noise ratio but constrained by limited spatial coverage and small patient cohorts. In this work, we investigate the transferability of EEG-FMs to ECoG-based decoding tasks. We propose an ECoG-to-EEG channel projection module and a lightweight adaptation strategy to efficiently update the pretrained FM backbone. Benchmarking results demonstrate that our adapted EEG-FM outperforms conventional ECoG decoders in extracting finger movement-related information from low-frequency sampled ECoG signals (i.e., a sampling rate of 128 Hz). These findings establish a new paradigm for intracranial brain-computer interfaces, suggesting that cross-modality knowledge can be leveraged to improve neural decoding of finger movements.
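One simple reading of the ECoG-to-EEG channel projection module described in the abstract is a learned linear map from a patient-specific ECoG grid to the fixed EEG channel layout the pretrained FM expects. The sketch below illustrates that idea only; all names, shapes, and the choice of a plain linear projection are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def project_ecog_to_eeg(ecog: np.ndarray, weight: np.ndarray) -> np.ndarray:
    """Map ECoG channels into a fixed EEG channel space via a linear projection.

    ecog:   (n_ecog_channels, n_samples) raw signal window
    weight: (n_eeg_channels, n_ecog_channels) learned projection matrix
    returns (n_eeg_channels, n_samples) pseudo-EEG input for the FM backbone
    """
    return weight @ ecog

# Hypothetical sizes: a 64-contact ECoG grid projected onto a 19-channel
# EEG montage; 128 Hz matches the sampling rate quoted in the abstract.
n_ecog, n_eeg, fs, seconds = 64, 19, 128, 2
ecog_window = rng.standard_normal((n_ecog, fs * seconds))
W = rng.standard_normal((n_eeg, n_ecog)) / np.sqrt(n_ecog)  # scaled random init

pseudo_eeg = project_ecog_to_eeg(ecog_window, W)
print(pseudo_eeg.shape)  # (19, 256)
```

In practice such a projection would be trained jointly with the lightweight adaptation of the backbone; the point here is only that a per-patient channel map lets heterogeneous ECoG grids share one pretrained EEG input interface.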
Track: Research Track (max 4 pages)
Submission Number: 46