PaPaGei: Open Foundation Models for Optical Physiological Signals

Published: 10 Oct 2024, Last Modified: 26 Nov 2024 · NeurIPS 2024 TSALM Workshop (Oral) · CC BY 4.0
Keywords: photoplethysmography, time series, foundation, representation learning, self-supervised learning
TL;DR: We develop an open foundation model for photoplethysmography that can be applied to a variety of downstream medical tasks. We propose a self-supervised learning method based on PPG signal morphology.
Abstract:

Photoplethysmography (PPG) is the most widely used non-invasive technique for monitoring biosignals and cardiovascular health, with applications in both clinical settings and consumer health through wearable devices. However, most models applied to PPG data are task-specific and lack generalizability. The few prior works in this direction often used single-device datasets, did not explore out-of-domain generalization, or did not release their models, hindering open research. Here, we introduce PaPaGei, the first open foundation model for PPG signals. Pre-trained on more than 57,000 hours (20 million segments) of unlabeled PPG signals drawn exclusively from publicly available datasets, PaPaGei is evaluated against popular time-series foundation models and other benchmarks on 18 diverse tasks spanning cardiovascular health, sleep disorders, pregnancy monitoring, and wellbeing assessment. PaPaGei's architecture incorporates a novel representation learning approach that examines differences in PPG signal morphology across individuals, enabling it to capture rich representations. Across 18 clinically-relevant classification and regression tasks, PaPaGei outperforms baselines in 13, with average improvements of 6.3% and 2.9%, respectively. Notably, it can be used out of the box as both a feature extractor and an encoder for other multimodal models, opening up new opportunities for multimodal health monitoring.
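The "feature extractor" use described above can be sketched as a standard pipeline: segment the raw PPG signal into fixed-length windows, normalize each window, and pass it through the frozen encoder to obtain embeddings for a downstream probe. The sketch below is illustrative only, assuming a 125 Hz signal and 10 s windows; the `encode` step uses a fixed random projection as a stand-in for the actual PaPaGei network, whose weights and API are not given in this abstract.

```python
import numpy as np

def segment(signal, fs=125, win_s=10):
    """Split a 1-D PPG signal into non-overlapping fixed-length windows."""
    n = fs * win_s
    k = len(signal) // n
    return signal[:k * n].reshape(k, n)

def zscore(x):
    """Per-window z-score normalization, guarding against flat segments."""
    return (x - x.mean(axis=1, keepdims=True)) / (x.std(axis=1, keepdims=True) + 1e-8)

rng = np.random.default_rng(0)

# Stand-in for the frozen PaPaGei encoder: a fixed random projection to a
# 512-d embedding. The real model is a learned network; this only mimics
# the input/output shapes of a feature extractor.
W = rng.standard_normal((1250, 512)) / np.sqrt(1250)

def encode(windows):
    return zscore(windows) @ W

ppg = rng.standard_normal(125 * 60)   # one minute of synthetic "PPG" at 125 Hz
emb = encode(segment(ppg))            # (6, 512): one embedding per 10 s window
print(emb.shape)
```

The resulting embeddings would then feed a lightweight downstream head (e.g. a linear probe) for each clinical task, which is how foundation-model encoders are typically evaluated.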

Submission Number: 87