SleepFM: Multi-modal Representation Learning for Sleep across ECG, EEG and Respiratory Signals

Published: 29 Feb 2024, Last Modified: 02 May 2024
Venue: AAAI 2024 SSS on Clinical FMs
License: CC BY 4.0
Track: Traditional track
Keywords: Deep Learning, Foundation Models, Sleep Study, Apnea
TL;DR: SleepFM: Multi-modal Foundation Model for Sleep
Abstract: Sleep is a complex physiological process involving multiple modalities across the body. We curate a large dataset of simultaneous polysomnography (PSG) recordings comprising electrical brain activity (EEG), heart rhythms (ECG), and respiratory patterns from over 14,000 participants, totaling more than 100,000 hours of sleep data. We develop SleepFM, the first multi-modal foundation model for sleep, trained with contrastive learning on this highly heterogeneous physiological data. On a held-out test set, SleepFM achieves retrieval performance over 500x better than random chance. A logistic regression model trained on SleepFM's learned embeddings achieves strong performance on sleep stage classification (macro AUPRC 0.69) and apnea detection (AUPRC 0.71), outperforming an end-to-end trained CNN on both tasks (AUPRC 0.579 and 0.56, respectively). We find that representations learned with a novel leave-one-out approach during contrastive learning significantly improve downstream task performance compared to representations from standard pairwise contrastive learning. This work demonstrates the value of holistic multi-modal sleep modeling.
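The abstract's central methodological contrast, leave-one-out versus standard pairwise contrastive learning, can be sketched concretely. Below is a minimal PyTorch sketch, not the authors' implementation: it assumes the leave-one-out objective contrasts each modality's embedding against the mean of the remaining modalities' embeddings via an InfoNCE loss over the batch; the function names, the one-directional loss, and the temperature value are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, temperature=0.1):
    # Standard InfoNCE: the positive for each anchor is the same-index row of
    # `positive`; all other rows in the batch serve as negatives. Shown
    # one-directional for brevity; symmetric variants are common.
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    logits = anchor @ positive.t() / temperature  # (B, B) similarity matrix
    labels = torch.arange(anchor.size(0), device=anchor.device)
    return F.cross_entropy(logits, labels)

def leave_one_out_loss(embs, temperature=0.1):
    # embs: list of (B, D) embeddings, one per modality (e.g. EEG, ECG,
    # respiratory). Each modality is contrasted against the mean embedding
    # of the remaining modalities for the same recording window.
    loss = 0.0
    for i, anchor in enumerate(embs):
        rest = [e for j, e in enumerate(embs) if j != i]
        target = torch.stack(rest).mean(dim=0)
        loss = loss + info_nce(anchor, target, temperature)
    return loss / len(embs)

def pairwise_loss(embs, temperature=0.1):
    # Baseline CLIP-style objective: contrast every pair of modalities.
    loss, n_pairs = 0.0, 0
    for i in range(len(embs)):
        for j in range(i + 1, len(embs)):
            loss = loss + info_nce(embs[i], embs[j], temperature)
            n_pairs += 1
    return loss / n_pairs
```

In use, `embs` would be something like `[eeg_encoder(x_eeg), ecg_encoder(x_ecg), resp_encoder(x_resp)]` (encoder names hypothetical). Note that the pairwise variant requires O(M^2) loss terms for M modalities per batch, while the leave-one-out variant requires only M.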
Presentation And Attendance Policy: I have read and agree with the symposium's policy on behalf of myself and my co-authors.
Ethics Board Approval: Yes, we have/will include(d) information about IRB approval or its equivalent, in the manuscript.
Data And Code Availability: No, we will not be making any data and/or code public.
Primary Area: Clinical foundation models
Student First Author: Yes, the primary author of the manuscript is a student.
Submission Number: 21