Keywords: Foundation Models, Large Brainwave Foundation Models, Brain-Computer Interface (BCI), Electroencephalogram (EEG), Contrastive Learning
Abstract: Foundation models are beginning to reshape EEG representation learning, but existing approaches remain dominated by self-supervised reconstruction objectives. In this work, we introduce the first subject-aware contrastive EEG foundation model, leveraging subject identity as a natural supervisory signal. Building on a patch-based architecture inspired by recent Large Brainwave Foundation Models (LBMs), we pretrain a lightweight transformer encoder using contrastive learning, where positive pairs are drawn from different segments and sessions of the same subject. Unlike contrastive foundation models in other domains, which depend on augmentations to construct positive samples, our method relies on naturally occurring intra-subject variability across EEG sessions. We evaluate the model using both representation metrics (alignment, uniformity, and smooth effective rank) and downstream tasks (with both linear probing and full fine-tuning). Results show that our model produces well-structured representation spaces and achieves downstream performance competitive with other LBMs.
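To make the subject-aware contrastive objective concrete, the following is a minimal sketch (not the authors' implementation) of an InfoNCE-style loss in which positives are embeddings of different segments or sessions from the same subject and negatives are drawn from other subjects in the batch. The function name, the `temperature` parameter, and the assumption that paired rows share a subject are illustrative and not taken from the paper.

```python
# Hypothetical sketch of a subject-aware contrastive (InfoNCE-style) loss.
# z_a and z_b hold embeddings of two EEG segments per subject; row i of z_a
# and row i of z_b come from the same subject (possibly different sessions).
import torch
import torch.nn.functional as F


def subject_aware_info_nce(z_a: torch.Tensor, z_b: torch.Tensor,
                           temperature: float = 0.1) -> torch.Tensor:
    # Normalize so the dot product is cosine similarity.
    z_a = F.normalize(z_a, dim=-1)
    z_b = F.normalize(z_b, dim=-1)
    # (B, B) similarity matrix: diagonal entries are same-subject pairs
    # (positives); off-diagonal entries are cross-subject pairs (negatives).
    logits = z_a @ z_b.t() / temperature
    targets = torch.arange(z_a.size(0), device=z_a.device)
    # Symmetrize over both views of each subject.
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))
```

No data augmentation is required here: the two views per subject are simply distinct recordings, which is the intra-subject variability the abstract describes as the source of positive pairs.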
Submission Number: 90