Towards Generalizable Multimodal ECG Representation Learning with LLM-extracted Clinical Entities

Published: 09 Jun 2025 · Last Modified: 09 Jun 2025 · FMSD @ ICML 2025 · CC BY 4.0
Keywords: ECG Multimodal Learning, Large Language Model, Named Entity Recognition, Natural Language Processing Application, Artificial Intelligence, Machine Learning
TL;DR: Structured cardiac entity supervision enables a multimodal ECG-Text model to outperform prior methods on zero-shot cardiac diagnosis.
Abstract: Electrocardiogram (ECG) recordings are essential for cardiac diagnostics but require large-scale annotation for supervised learning. In this work, we propose a supervised pre-training framework for multimodal ECG representation learning that leverages Large Language Model (LLM) based clinical entity extraction from ECG reports to build structured cardiac queries. By fusing ECG signals with standardized queries rather than categorical labels, our model enables zero-shot classification of unseen conditions. Experiments on six downstream datasets demonstrate a zero-shot AUC of 77.20%, outperforming state-of-the-art self-supervised and multimodal baselines by 4.98%. Our findings suggest that incorporating structured clinical knowledge via LLM-extracted entities leads to more semantically aligned and generalizable ECG representations than typical contrastive or generative objectives.
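The zero-shot pipeline the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: `encode_text`, `build_queries`, and the toy hash-based embedding are hypothetical stand-ins for the learned ECG and text encoders, and the query template is assumed. Only the overall flow (entities → standardized queries → embedding similarity → zero-shot prediction) reflects the described method.

```python
import hashlib
import numpy as np

def encode_text(query: str, dim: int = 8) -> np.ndarray:
    """Toy deterministic text embedding (stand-in for a learned text encoder):
    seed a random vector from a hash of the query, then L2-normalize."""
    seed = int.from_bytes(hashlib.md5(query.encode()).digest()[:4], "big")
    v = np.random.default_rng(seed).normal(size=dim)
    return v / np.linalg.norm(v)

def build_queries(entities: list[str]) -> list[str]:
    """Turn LLM-extracted clinical entities into standardized text queries
    (the exact template used in the paper is not specified here)."""
    return [f"ECG shows {e}." for e in entities]

def zero_shot_classify(ecg_embedding: np.ndarray, entities: list[str]):
    """Score a (normalized) ECG embedding against each entity query by
    cosine similarity and return the best-matching condition."""
    sims = np.array([float(ecg_embedding @ encode_text(q))
                     for q in build_queries(entities)])
    return entities[int(np.argmax(sims))], sims

# Unseen conditions can be added at inference time as new queries:
entities = ["atrial fibrillation", "left bundle branch block", "normal sinus rhythm"]
# For illustration, pretend the ECG encoder mapped the signal exactly onto
# the first condition's query embedding; the classifier recovers it.
ecg_emb = encode_text(build_queries(entities)[0])
pred, sims = zero_shot_classify(ecg_emb, entities)
```

Because classes are represented as text queries rather than fixed label indices, extending the label set at test time only requires appending strings to `entities`, which is what enables zero-shot evaluation on unseen conditions.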
Submission Number: 44