Leveraging the Structure of Medical Data for Improved Representation Learning

ICML 2025 Workshop FM4LS, Submission 38 Authors

Published: 12 Jul 2025, Last Modified: 12 Jul 2025 · FM4LS 2025 · CC BY 4.0
Keywords: self-supervised learning, masked autoencoder, multi-view contrastive learning, medical imaging, chest X-ray, MIMIC-CXR, representation learning, domain-specific pretraining
TL;DR: We propose a multi-view-regularized masked autoencoder that learns from paired frontal-lateral chest X-rays without text labels, enabling data-efficient medical representation learning.
Abstract: Building generalizable medical AI systems requires pretraining strategies that are data-efficient and domain-aware. Unlike internet-scale corpora, clinical datasets such as MIMIC-CXR offer limited image counts and scarce annotations, but exhibit rich internal structure through multi-view imaging. We propose a self-supervised framework that leverages this inherent structure. Specifically, we treat paired chest X-rays (i.e., frontal and lateral views) as natural positive pairs, learning to reconstruct each view from sparse patches while aligning their latent embeddings. Our method requires no textual supervision and produces informative representations. Evaluated on MIMIC-CXR, it shows strong performance compared to supervised objectives and to baselines trained without leveraging structure. This work provides a lightweight, modality-agnostic blueprint for domain-specific pretraining where data is structured but scarce.
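The abstract describes the objective only at a high level: masked-patch reconstruction per view plus alignment of the paired views' embeddings. The sketch below is a minimal, hedged illustration of how such a combined loss could look, assuming an MAE-style mean-squared error over masked patches and an InfoNCE-style contrastive term over pooled frontal/lateral embeddings; the function name, tensor shapes, loss weighting, and the InfoNCE form are all illustrative assumptions, not the authors' actual implementation.

```python
import torch
import torch.nn.functional as F

def multiview_mae_loss(recon_f, target_f, mask_f,
                       recon_l, target_l, mask_l,
                       z_f, z_l, temperature=0.1, align_weight=1.0):
    """Hypothetical combined objective (not the paper's exact loss):
    masked-patch reconstruction per view plus contrastive alignment of
    the paired views' latent embeddings.

    recon_*/target_*: (B, N, D) predicted and ground-truth patch tensors.
    mask_*:           (B, N) binary mask, 1 where a patch was masked.
    z_*:              (B, E) pooled embeddings of the frontal/lateral views.
    """
    # MAE-style loss: mean squared error computed over masked patches only.
    def masked_mse(recon, target, mask):
        per_patch = ((recon - target) ** 2).mean(dim=-1)          # (B, N)
        return (per_patch * mask).sum() / mask.sum().clamp(min=1)

    rec_loss = (masked_mse(recon_f, target_f, mask_f)
                + masked_mse(recon_l, target_l, mask_l))

    # InfoNCE-style alignment: each frontal embedding should match its own
    # lateral embedding against the other pairs in the batch (and vice versa).
    z_f = F.normalize(z_f, dim=-1)
    z_l = F.normalize(z_l, dim=-1)
    logits = z_f @ z_l.t() / temperature                          # (B, B)
    labels = torch.arange(z_f.size(0), device=z_f.device)
    align_loss = 0.5 * (F.cross_entropy(logits, labels)
                        + F.cross_entropy(logits.t(), labels))

    return rec_loss + align_weight * align_loss

# Smoke test with random tensors (shapes are arbitrary placeholders).
B, N, D, E = 4, 196, 768, 256
loss = multiview_mae_loss(
    torch.randn(B, N, D), torch.randn(B, N, D), torch.randint(0, 2, (B, N)).float(),
    torch.randn(B, N, D), torch.randn(B, N, D), torch.randint(0, 2, (B, N)).float(),
    torch.randn(B, E), torch.randn(B, E))
print(loss.item())
```

Restricting the reconstruction loss to masked patches follows standard masked-autoencoder practice, while the symmetric cross-entropy over the similarity matrix is one common way to align two views; the paper may use a different alignment term or weighting.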
Submission Number: 38