Presentation Attendance: Yes, we will present in-person
Keywords: EEG, Self-Supervised Learning, EEG foundation model
TL;DR: A lean and balanced self-supervised learning pipeline for EEG representations, yielding up to a 6% linear probing improvement over SOTA foundation models.
Abstract: Electroencephalography (EEG) is critical for neurological diagnosis but suffers from a low signal-to-noise ratio (SNR) and high inter-subject variability. Current foundation models that rely on raw-signal reconstruction often overfit to local noise. We propose STELAR, a foundation model with a dual-space objective that combines patch-level masked latent prediction for semantic stability with masked reconstruction for raw-signal fidelity. To balance these objectives, we introduce MTPE-GB, a validation-driven gradient balancer that adaptively weights the tasks without manual tuning or computational overhead. STELAR achieves state-of-the-art linear probing performance across diverse EEG benchmarks, demonstrating robust generalization. All source code will be released.
Track: Research Track (max 4 pages)
Submission Number: 44