Keywords: Contrastive learning, Self-supervised pre-training, Electrocardiograms, Deep learning, Foundation model.
TL;DR: We present a foundation model, pre-trained on a large and diverse electrocardiogram cohort, that exploits patient-based contrastive learning and temporal augmentations to improve on existing pre-training approaches and supervised training benchmarks.
Abstract: Electrocardiograms (ECGs) capture the electrical activity of the heart, offering rich diagnostic and prognostic insights. Traditionally, ECGs are interpreted by human experts, but deep learning is increasingly complementing expert interpretation with machine precision to extract deeper insight. Self-supervised pre-training is essential for maximising the potential of scarce labelled medical data. Applied to ECGs, patient-contrastive learning has shown promising results by exploiting the natural variation in cardiac signals across recordings of the same patient. In this study, we introduce **T**emporally **A**ugmented **P**atient **C**ontrastive **L**earning of **R**epresentations (TA-PCLR), a novel approach that incorporates temporal augmentations into a patient-contrastive self-supervised foundation model. Trained on one of the largest and most diverse cohorts of more than six million unlabelled electrocardiograms from three continents, we demonstrate the efficacy of our approach and show its value as a feature extraction tool for small and medium-sized labelled datasets. We also validate performance on an open-source external cohort, surpassing other pre-training approaches while outperforming an ensemble of fully supervised deep networks on some labels. Additionally, we conduct a detailed exploration of how the distributions of the pre-training and labelled electrocardiogram datasets affect supervised task performance.
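To make the idea behind TA-PCLR concrete, the sketch below is a minimal, hypothetical illustration (not the authors' code) of patient-contrastive learning with a temporal augmentation: two randomly cropped time windows from a patient's ECG are embedded and treated as a positive pair, while segments from other patients in the batch serve as negatives under an NT-Xent (SimCLR/PCLR-style) loss. The encoder, crop length, and temperature are placeholder assumptions.

```python
# Hypothetical sketch of patient-contrastive learning with temporal augmentation.
# Positive pairs: two temporal crops drawn from the same patient's ECG.
# Negatives: crops from other patients in the same batch.
import torch
import torch.nn.functional as F

def random_temporal_crop(ecg: torch.Tensor, crop_len: int) -> torch.Tensor:
    """Randomly crop a contiguous window from an ECG of shape (leads, time)."""
    start = torch.randint(0, ecg.shape[-1] - crop_len + 1, (1,)).item()
    return ecg[..., start:start + crop_len]

def patient_contrastive_loss(z1: torch.Tensor, z2: torch.Tensor,
                             temperature: float = 0.1) -> torch.Tensor:
    """NT-Xent loss; row i of z1 and z2 are views of the same patient."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)               # (2B, dim)
    sim = z @ z.T / temperature                  # scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))            # exclude self-similarity
    batch = z1.shape[0]
    targets = torch.cat([torch.arange(batch, 2 * batch),
                         torch.arange(0, batch)])  # index of each row's positive
    return F.cross_entropy(sim, targets)

if __name__ == "__main__":
    # Toy usage: a linear "encoder" stands in for the ECG backbone.
    leads, samples, crop_len, batch = 12, 5000, 2500, 8
    encoder = torch.nn.Sequential(torch.nn.Flatten(),
                                  torch.nn.Linear(leads * crop_len, 128))
    ecgs = torch.randn(batch, leads, samples)    # one recording per patient
    view1 = torch.stack([random_temporal_crop(x, crop_len) for x in ecgs])
    view2 = torch.stack([random_temporal_crop(x, crop_len) for x in ecgs])
    loss = patient_contrastive_loss(encoder(view1), encoder(view2))
    print(loss.item())
```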
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 9841