SleepFM: Foundation Model for Sleep Analysis

ICLR 2024 Workshop TS4H Submission 3 Authors

Published: 08 Mar 2024 · Last Modified: 29 Mar 2024 · TS4H Poster · License: CC BY 4.0
Keywords: Machine Learning, Deep Learning, Foundation Model, Sleep Study, Healthcare Application
TL;DR: We introduce SleepFM, a sleep foundation model trained using contrastive learning on a multi-modal sleep dataset comprising over 100,000 hours of sleep monitoring data from over 14,000 participants at [anonymous] hospital.
Abstract: Sleep is a complex physiological process evaluated through various modalities that record electrical brain, cardiac, and respiratory activity. We curated a large polysomnography dataset from over 14,000 participants, comprising over 100,000 hours of sleep recordings. Leveraging this extensive dataset, we developed SleepFM, the first multi-modal foundation model for sleep analysis. We show that a novel leave-one-out contrastive learning objective significantly improves downstream task performance compared to standard pairwise contrastive learning. A logistic regression model trained on SleepFM's learned embeddings outperforms an end-to-end trained convolutional neural network (CNN) on sleep stage classification (macro AUROC 0.88 vs 0.72 and macro AUPRC 0.72 vs 0.48) and sleep disordered breathing detection (AUROC 0.85 vs 0.69 and AUPRC 0.77 vs 0.61). Notably, the learned embeddings achieve 48% top-1 average accuracy in retrieving modality clip pairs from 90,000 candidates. This work demonstrates the value of holistic multi-modal sleep modeling in fully capturing the richness of sleep recordings.
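The leave-one-out contrastive objective described in the abstract can be illustrated with a short sketch: each modality's embedding is contrasted against an aggregate of the remaining modalities rather than against each other modality pairwise. The code below is a minimal, hypothetical PyTorch sketch; the mean aggregation, the InfoNCE form, the temperature value, and all function and variable names are assumptions for illustration, not the authors' exact formulation.

```python
# Hypothetical sketch of leave-one-out (LOO) contrastive learning for three sleep
# modalities (e.g., brain, cardiac, respiratory). Assumes each modality encoder
# has already produced one L2-normalized embedding per 30-second clip.
import torch
import torch.nn.functional as F

def loo_contrastive_loss(embeddings, temperature=0.1):
    """embeddings: list of (batch, dim) tensors, one per modality, L2-normalized."""
    losses = []
    for i, anchor in enumerate(embeddings):
        # Leave-one-out target: aggregate all *other* modalities for the same clips.
        others = [e for j, e in enumerate(embeddings) if j != i]
        target = F.normalize(torch.stack(others).mean(dim=0), dim=-1)
        # InfoNCE over the batch: matching clips (diagonal) are positives,
        # all other clips in the batch serve as negatives.
        logits = anchor @ target.T / temperature
        labels = torch.arange(anchor.size(0), device=anchor.device)
        losses.append(F.cross_entropy(logits, labels))
    # Average the loss over the modalities used as anchors.
    return torch.stack(losses).mean()

# Example usage with random embeddings for a batch of 8 clips.
if __name__ == "__main__":
    embs = [F.normalize(torch.randn(8, 128), dim=-1) for _ in range(3)]
    print(loo_contrastive_loss(embs).item())
```

By contrast, standard pairwise contrastive learning would apply the same InfoNCE term to every ordered pair of modalities; the leave-one-out variant instead asks each modality to agree with a joint summary of the others, which is the distinction the abstract credits for the downstream gains.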
Submission Number: 3