Unified 3D MRI Representations via Sequence-Invariant Contrastive Learning

16 Jan 2025 (modified: 03 Feb 2025) · MIDL 2025 Conference Submission · CC BY 4.0
Keywords: mri, self-supervised learning, synthetic data, data augmentation, contrastive learning
TL;DR: Using physics-based qMRI simulations, we train a sequence-invariant self-supervised model for 3D MRI that significantly improves robustness and performance across diverse tasks and acquisition protocols.
Abstract: Self-supervised deep learning has accelerated 2D natural image analysis but remains difficult to translate into 3D MRI, where data are scarce and pre-trained 2D backbones cannot capture volumetric context. We present a sequence-invariant self-supervised framework leveraging quantitative MRI (qMRI). By simulating multiple MRI contrasts from a single 3D qMRI scan and enforcing consistent representations across these contrasts, we learn anatomy-centric rather than sequence-specific features. This yields a robust 3D encoder that performs strongly across varied tasks and protocols. Experiments on healthy brain segmentation (IXI), stroke lesion segmentation (ARC), and MRI denoising show significant gains over baseline SSL approaches, especially in low-data settings (up to +8.3% Dice, +4.2 dB PSNR). Our model also generalises effectively to unseen sites, demonstrating potential for more scalable and clinically reliable volumetric analysis. All code and trained models are publicly available.
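The core idea in the abstract is that a single quantitative MRI scan (maps of tissue parameters such as proton density, T1, and T2*) can be passed through a physics-based signal model under different acquisition settings, yielding many synthetic contrasts of the same anatomy to serve as positive pairs for contrastive learning. The sketch below illustrates this with the standard steady-state spoiled gradient-echo (SPGR) signal equation; the toy parameter maps, protocol ranges, and function names are illustrative assumptions, not the authors' actual simulation pipeline.

```python
import numpy as np

def spgr_signal(pd, t1, t2s, tr, te, flip_deg):
    """Steady-state spoiled gradient-echo (SPGR) signal equation.

    pd: proton density map; t1, t2s: relaxation time maps (ms);
    tr, te: repetition/echo time (ms); flip_deg: flip angle (degrees).
    """
    a = np.deg2rad(flip_deg)
    e1 = np.exp(-tr / t1)
    return pd * np.sin(a) * (1.0 - e1) / (1.0 - np.cos(a) * e1) * np.exp(-te / t2s)

rng = np.random.default_rng(0)

# Toy 3D qMRI parameter maps for an 8x8x8 volume (values are illustrative).
pd = rng.uniform(0.5, 1.0, (8, 8, 8))
t1 = rng.uniform(600.0, 1400.0, (8, 8, 8))     # ms
t2s = rng.uniform(40.0, 80.0, (8, 8, 8))       # ms

# Two randomly sampled acquisition protocols applied to the SAME anatomy
# produce two distinct contrasts -- a positive pair for contrastive learning.
c1 = spgr_signal(pd, t1, t2s,
                 tr=rng.uniform(20, 500), te=rng.uniform(2, 20),
                 flip_deg=rng.uniform(5, 60))
c2 = spgr_signal(pd, t1, t2s,
                 tr=rng.uniform(20, 500), te=rng.uniform(2, 20),
                 flip_deg=rng.uniform(5, 60))

# A sequence-invariant encoder would be trained to map c1 and c2 to
# nearby embeddings (e.g. with an InfoNCE-style loss), so that the learned
# features reflect anatomy rather than the acquisition protocol.
```

Because both volumes are deterministic functions of the same underlying parameter maps, any feature that differs between them must come from the simulated sequence, which is exactly the variation the contrastive objective teaches the encoder to discard.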
Primary Subject Area: Unsupervised Learning and Representation Learning
Secondary Subject Area: Transfer Learning and Domain Adaptation
Paper Type: Methodological Development
Registration Requirement: Yes
Reproducibility: https://github.com/liamchalcroft/contrast-squared
Visa & Travel: Yes
Submission Number: 125