Adapting a Generative Pretrained Transformer Achieves SOTA Performance in Assessing Diverse Physiological Functions Using Only Photoplethysmography Signals: A GPT-PPG Approach

Published: 29 Feb 2024, Last Modified: 02 May 2024
Venue: AAAI 2024 SSS on Clinical FMs
License: CC BY 4.0
Track: Traditional track
Keywords: Photoplethysmography, clinical foundation model, Generative Pretrained Transformer
Abstract: This study introduces a novel application of a Generative Pre-trained Transformer (GPT) model tailored for photoplethysmography (PPG) signals, serving as a foundation model for various downstream tasks. Adapting the standard GPT architecture to suit the continuous characteristics of PPG signals, our approach demonstrates promising results. After pre-training on our extensive dataset of more than 200 million 30-second PPG samples, the model performs comparably to or surpasses current state-of-the-art (SOTA) methods on tasks such as heart rate estimation. A standout feature of our GPT model is its inherent capability to perform generative tasks, such as signal denoising, effectively and without the need for further fine-tuning. We attribute this success to the generative nature of the GPT framework. Looking ahead, we aim to further explore its generative abilities and investigate their implications for other downstream tasks.
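The submission does not include code, but the abstract's core idea, adapting a discrete-token GPT to a continuous signal, can be sketched. The following is a minimal, hypothetical PyTorch illustration, not the authors' implementation: the names (PPGGPT, pretrain_step), the patch length, the model width, and the next-patch MSE objective are all assumptions standing in for unstated details. It shows the two changes such an adaptation typically requires: a linear projection of signal patches in place of a token-embedding lookup, and a regression head with an MSE loss in place of vocabulary logits and cross-entropy.

```python
import torch
import torch.nn as nn


class PPGGPT(nn.Module):
    """Decoder-only transformer for continuous PPG signals.

    Hypothetical sketch: the paper does not publish its architecture,
    so patch size, depth, width, and loss here are illustrative guesses.
    """

    def __init__(self, patch_len=40, d_model=256, n_heads=8,
                 n_layers=6, max_patches=512):
        super().__init__()
        self.patch_len = patch_len
        # Continuous signals have no vocabulary, so the token-embedding
        # lookup of a standard GPT is replaced by a linear projection
        # of fixed-length signal patches.
        self.in_proj = nn.Linear(patch_len, d_model)
        self.pos_emb = nn.Embedding(max_patches, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, dim_feedforward=4 * d_model,
            batch_first=True, norm_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Regression head predicts the next patch instead of token logits.
        self.out_proj = nn.Linear(d_model, patch_len)

    def forward(self, x):
        # x: (batch, n_patches, patch_len) slices of a 30 s PPG window
        b, t, _ = x.shape
        h = self.in_proj(x) + self.pos_emb(torch.arange(t, device=x.device))
        # Causal mask keeps the autoregressive, GPT-style factorization.
        mask = nn.Transformer.generate_square_subsequent_mask(t).to(x.device)
        h = self.blocks(h, mask=mask)
        return self.out_proj(h)


def pretrain_step(model, x, optimizer):
    # Next-patch prediction: each position regresses the following patch;
    # MSE stands in for the cross-entropy loss of a discrete GPT.
    pred = model(x[:, :-1])
    loss = nn.functional.mse_loss(pred, x[:, 1:])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Under this framing, generative downstream uses such as denoising fall out of the same objective: a corrupted window can be re-synthesized patch by patch with the pre-trained model, with no task-specific head or fine-tuning required.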
Presentation And Attendance Policy: I have read and agree with the symposium's policy on behalf of myself and my co-authors.
Ethics Board Approval: No, our research does not involve datasets that need IRB approval or its equivalent.
Data And Code Availability: Yes, we will make data and code available upon acceptance.
Primary Area: Clinical foundation models
Student First Author: Yes, the primary author of the manuscript is a student.
Submission Number: 34