LUNA: Efficient and Topology-Agnostic Foundation Model for EEG Signal Analysis

Published: 09 Jun 2025 (last modified: 09 Jun 2025) · FMSD @ ICML 2025 · CC BY 4.0
Keywords: Electroencephalography (EEG), Biosignal Processing, Transformers, Self-Supervised Learning, Foundation Models, Representation Learning
TL;DR: This paper introduces a computationally efficient foundation model that achieves topology-invariant EEG analysis by projecting signals from diverse electrode configurations onto a unified latent space.
Abstract: Electroencephalography (EEG) offers a non-invasive lens into human brain activity, but building large-scale models is hampered by $\textit{topological heterogeneity}$: each public EEG dataset defines its own electrode layout, limiting generalization. We introduce $\textbf{LUNA}$ ($\textbf{L}$atent $\textbf{U}$nified $\textbf{N}$etwork $\textbf{A}$rchitecture), a self-supervised foundation model that reconciles disparate electrode geometries while scaling linearly---not quadratically---with channel count. Pre-trained on TUEG and Siena ($>$ 21,000 hours of raw EEG across diverse montages) using a masked signal reconstruction task, LUNA transfers effectively to four downstream tasks: abnormality detection, artifact detection, slowing classification, and emotion recognition. It delivers competitive performance across several benchmarks and achieves state-of-the-art results on TUAR and TUSL, e.g., $\textbf{0.921 AUROC}$ on TUAR, while reducing FLOPs by $\textbf{300}$$\times$ and GPU memory use by up to $\textbf{10}$$\times$. Code and pre-trained models will be released upon publication.
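The topology-agnostic idea summarized above, projecting any electrode layout onto a fixed-size latent space at linear cost in channel count, can be sketched with cross-attention from a fixed set of learned queries. This is an illustrative assumption about how such a projection could work, not LUNA's published architecture; the function name `channels_to_latent` and all dimensions are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def channels_to_latent(channel_tokens, queries, d):
    """Cross-attend a FIXED set of Q learned queries onto per-channel tokens.

    Because Q is constant, cost is O(Q * C): linear in the number of
    electrodes C, and the output shape is the same for any montage.
    """
    # channel_tokens: (C, d) -- one embedding per electrode, any layout
    # queries:        (Q, d) -- learned, montage-independent
    attn = softmax(queries @ channel_tokens.T / np.sqrt(d), axis=-1)  # (Q, C)
    return attn @ channel_tokens  # (Q, d) unified latent

d, Q = 16, 4
queries = rng.normal(size=(Q, d))

# Two hypothetical recordings with different electrode counts (19 vs 64):
lat19 = channels_to_latent(rng.normal(size=(19, d)), queries, d)
lat64 = channels_to_latent(rng.normal(size=(64, d)), queries, d)
print(lat19.shape, lat64.shape)  # both (4, 16): montage-invariant latent
```

The key property, under this assumed design, is that downstream layers only ever see the `(Q, d)` latent, so they are decoupled from the recording's electrode configuration.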
Submission Number: 64