Contrastive Pre-Training for Multimodal Medical Time Series

Published: 02 Dec 2022, Last Modified: 05 May 2023 (TS4H Spotlight)
Keywords: multimodal, representation learning, pretraining, contrastive learning
TL;DR: We propose a contrastive learning pipeline for multimodal medical time series consisting of physiological signals, labs, and vital signs, and find that it achieves improved or competitive performance relative to baselines.
Abstract: Clinical time series data are highly rich and provide significant information about a patient's physiological state. However, these time series can be complex to model, particularly when they consist of multimodal data measured at different resolutions. Most existing methods for learning representations of these data consider only tabular time series (e.g., lab measurements and vital signs) and do not naturally extend to modelling a full, multimodal time series. In this work, we propose a contrastive pre-training strategy to learn representations of multimodal time series. We consider a setting where the time series contains sequences of (1) high-frequency electrocardiograms and (2) structured data from labs and vital signs. We outline a strategy to generate augmentations of these data for contrastive learning, building on recent work in representation learning for medical data. We evaluate our method on a real-world dataset, finding that it obtains improved or competitive performance compared to baselines on two downstream tasks.
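The abstract describes contrastive pre-training over augmented views of multimodal patient data. As a rough illustration of the underlying objective (not the authors' implementation, whose details are in the paper), the following is a minimal NumPy sketch of the standard NT-Xent/InfoNCE loss, where row i of each view matrix embeds two augmentations of the same patient record:

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.1):
    """NT-Xent / InfoNCE loss over two augmented views.

    z1, z2: (batch, dim) embeddings of two augmentations of the same
    patient records; row i of z1 and row i of z2 form a positive pair.
    This is an illustrative sketch, not the paper's exact pipeline.
    """
    # L2-normalise so dot products are cosine similarities.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    z = np.concatenate([z1, z2], axis=0)        # (2B, dim)
    sim = z @ z.T / temperature                 # pairwise similarities, (2B, 2B)
    np.fill_diagonal(sim, -np.inf)              # exclude self-similarity

    B = z1.shape[0]
    # Row i's positive is row i+B (and vice versa).
    pos = np.concatenate([np.arange(B, 2 * B), np.arange(B)])

    # Numerically stable log-softmax, then cross-entropy at the positive index.
    logits = sim - sim.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * B), pos].mean()
```

In practice, each view would come from a modality-appropriate augmentation (e.g., perturbing the ECG sequence or masking tabular entries) before encoding; the loss pulls the two views of a patient together while pushing apart views of different patients.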