Contrastive Patient-level Pretraining Enables Longitudinal and Multimodal Fusion for Lung Cancer Risk Prediction
Keywords: contrastive language-image pretraining (CLIP), multimodal, chest CT, lung cancer
TL;DR: contrastive pretraining enhances the fusion of longitudinal and multimodal medical data without requiring semantically paired modalities or additional training examples.
Abstract: Leveraging longitudinal and multimodal data is important for clinical predictive tasks. Contrastive language-image pretraining (CLIP) has been successful in learning multimodal representations by aligning paired images and captions, e.g., medical images and their corresponding radiology reports. However, in real clinical settings, aligning unpaired modalities, such as medical images and clinical notes collected at different times, remains an open challenge, even though such data are ubiquitous in practice. This study conducts contrastive pretraining between longitudinal chest CTs and clinical variables at the patient level using a large public lung cancer screening dataset. Leveraging a time-distanced transformer to encode longitudinal imaging and an open-source text embedding model to encode clinical variables, we optimize a contrastive loss between the embedded modalities from the same patient (positive pairs) against those from different patients (negative pairs). We find that fine-tuning the CLIP representation significantly improves prediction of lung cancer risk in two types of clinical populations (0.895 and 0.893 AUC) compared to conventional multimodal fusion (0.873 and 0.875 AUC) and single-modality baselines. These results demonstrate how contrastive patient-level pretraining can enable longitudinal and multimodal fusion without additional training data. We released our code and pre-trained weights at https://github.com/MASILab/lung-cplp.
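The patient-level contrastive objective described in the abstract follows the standard CLIP recipe: embeddings of the two modalities from the same patient form the positive pair on the diagonal of a similarity matrix, and all cross-patient combinations serve as negatives. A minimal numpy sketch of this symmetric contrastive (InfoNCE) loss is below; the function name, temperature value, and use of numpy (rather than the authors' released PyTorch code) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def patient_level_clip_loss(ct_emb, clin_emb, temperature=0.07):
    """Symmetric CLIP-style contrastive loss over a batch of patients.

    ct_emb:   (N, D) imaging embeddings, one row per patient
    clin_emb: (N, D) clinical-variable embeddings, same patient order
    Row i of each matrix is assumed to come from the same patient,
    so the diagonal of the similarity matrix holds the positive pairs.
    """
    # L2-normalize so the dot product is cosine similarity
    ct = ct_emb / np.linalg.norm(ct_emb, axis=1, keepdims=True)
    clin = clin_emb / np.linalg.norm(clin_emb, axis=1, keepdims=True)

    # (N, N) similarity matrix scaled by temperature
    logits = ct @ clin.T / temperature
    targets = np.arange(logits.shape[0])

    def cross_entropy(l):
        # numerically stable log-softmax over each row
        l = l - l.max(axis=1, keepdims=True)
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[targets, targets].mean()

    # average the image->clinical and clinical->image directions
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

With perfectly aligned patient pairs the loss approaches zero, while mismatched pairings yield a higher loss, which is what drives the two encoders toward a shared patient-level representation.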
Primary Subject Area: Integration of Imaging and Clinical Data
Secondary Subject Area: Unsupervised Learning and Representation Learning
Paper Type: Validation or Application
Registration Requirement: Yes
Reproducibility: https://github.com/MASILab/lung-cplp
Submission Number: 240