Pretraining Sleep Staging Models without Patient Data

ICLR 2024 Workshop TS4H Submission 17 Authors

Published: 08 Mar 2024, Last Modified: 12 Mar 2024 · TS4H Oral · CC BY 4.0
Keywords: EEG, sleep staging, deep learning, pretraining, synthetic data
TL;DR: This paper introduces "frequency pretraining," a method that pretrains models on synthetic data to improve EEG-based sleep staging when real-world data is limited.
Abstract: Analyzing electroencephalographic (EEG) time series can be challenging, especially with deep neural networks, due to the large variability among human subjects and often small datasets. To address these challenges, various strategies, such as self-supervised learning, have been suggested, but they typically rely on extensive empirical datasets. Inspired by recent advances in computer vision, we propose a pretraining task termed “frequency pretraining” to pretrain a neural network for sleep staging by predicting the frequency content of randomly generated synthetic time series. Our experiments demonstrate that our method surpasses fully supervised learning in scenarios with limited data and few subjects, and matches its performance in regimes with many subjects. We anticipate that our approach will be advantageous across a broad spectrum of applications where EEG data is limited or derived from a small number of subjects, including the domain of brain-computer interfaces.
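The abstract describes the pretraining task as predicting the frequency content of randomly generated synthetic time series. Below is a minimal sketch of what such a task could look like; the frequency bins, sampling rate, signal-generation procedure, network architecture, and hyperparameters are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np
import torch
import torch.nn as nn

# Hypothetical frequency bins (Hz) covering a sleep-relevant EEG range;
# the bins actually used in the paper are not specified in the abstract.
FREQ_BINS = np.linspace(0.5, 30.0, 20)
FS = 100          # sampling rate in Hz (assumption)
N_SAMPLES = 3000  # 30-second windows, matching standard sleep-staging epochs

def make_synthetic_batch(batch_size, rng):
    """Generate random sinusoid mixtures with multi-hot frequency labels."""
    t = np.arange(N_SAMPLES) / FS
    x = np.zeros((batch_size, 1, N_SAMPLES), dtype=np.float32)
    y = np.zeros((batch_size, len(FREQ_BINS)), dtype=np.float32)
    for i in range(batch_size):
        present = rng.random(len(FREQ_BINS)) < 0.3  # each bin active w.p. 0.3
        y[i] = present
        for f in FREQ_BINS[present]:
            phase = rng.uniform(0, 2 * np.pi)
            amp = rng.uniform(0.5, 1.5)
            x[i, 0] += amp * np.sin(2 * np.pi * f * t + phase)
        x[i, 0] += rng.normal(0, 0.1, N_SAMPLES)    # additive noise
    return torch.from_numpy(x), torch.from_numpy(y)

# Simple 1-D CNN encoder; for sleep staging, a classification head would
# later be fine-tuned on real EEG in place of the final linear layer.
encoder = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=7, stride=2), nn.ReLU(),
    nn.Conv1d(16, 32, kernel_size=7, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(32, len(FREQ_BINS)),
)

optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()  # multi-label prediction of present frequencies
rng = np.random.default_rng(0)

for step in range(100):  # pretraining loop on purely synthetic data
    x, y = make_synthetic_batch(32, rng)
    optimizer.zero_grad()
    loss = loss_fn(encoder(x), y)
    loss.backward()
    optimizer.step()
```

The key property this sketch tries to capture is that no patient data is needed for pretraining: labels come for free from the signal-generation process, and only the downstream sleep-staging fine-tuning requires real EEG recordings.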
Submission Number: 17