Presentation Attendance: Yes, we will present in-person
Keywords: Time series foundation model, EEG foundation model, EEG classification, benchmark
TL;DR: This paper benchmarks generic time-series foundation models (Mantis and MOMENT) on four public EEG classification datasets, showing that fine-tuning can be competitive with EEG-specific models while linear probing transfers poorly.
Abstract: Generic time-series foundation models (TS FMs) are now trained on large-scale collections of heterogeneous time series, but whether this broad pre-training actually helps in specialized domains remains an open question. We take EEG classification as a case study, benchmarking two generic TS FMs, Mantis ($\sim$8M parameters) and MOMENT ($>$300M parameters), on four public EEG datasets (TUEV, FACED, BCI-IV-2A, and Error) under linear probing and fine-tuning, and comparing them against EEG foundation models and classic neural baselines. Fine-tuning often matches EEG-specific models, while linear probing transfers poorly. Interestingly, randomly initialized Mantis performs comparably to its pre-trained version, suggesting that its architecture, rather than pre-training, may be driving much of its performance. These results illustrate both the promise and the limits of generic TS FMs for specialized domains.
Track: Research Track (max 4 pages)
Submission Number: 48