MedVista3D: Vision-Language Modeling for Reducing Diagnostic Errors in 3D CT Disease Detection, Understanding and Reporting
Abstract: Radiologic diagnostic errors, including under-reading, inattentional blindness, and
communication failures, remain prevalent in clinical practice. These issues often
stem from missed localized abnormalities, limited global context, and variability in
report language. These challenges are amplified in 3D imaging, where clinicians
must examine hundreds of slices per scan. Addressing them requires systems with
precise localized detection, global volume-level reasoning, and semantically consistent natural language reporting. However, existing 3D vision-language models
fail to meet all three needs jointly: they lack local-global understanding for spatial reasoning and struggle with the variability and noise of uncurated radiology
reports. We present MedVista3D, a multi-scale semantic-enriched vision-language
pretraining framework for 3D CT analysis. To enable joint disease detection and
holistic interpretation, MedVista3D performs local and global image-text alignment
for fine-grained representation learning within full-volume context. To address
report variability, we apply language model rewrites and introduce a Radiology
Semantic Matching Bank for semantics-aware alignment. MedVista3D achieves
state-of-the-art performance on zero-shot disease classification, report retrieval, and
medical visual question answering, while transferring well to organ segmentation
and prognosis prediction. Code and datasets will be released.
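
To make the multi-scale alignment objective concrete, the following is a minimal sketch of joint local (patch-to-sentence) and global (volume-to-report) contrastive alignment. All tensor shapes, variable names, and the symmetric InfoNCE formulation are illustrative assumptions, not the authors' released implementation; the Radiology Semantic Matching Bank is not reproduced here.

```python
# Minimal sketch of local + global image-text contrastive alignment,
# under assumed shapes and a standard symmetric InfoNCE loss.
import torch
import torch.nn.functional as F

def info_nce(a, b, temperature=0.07):
    """Symmetric InfoNCE loss between two batches of paired embeddings."""
    a = F.normalize(a, dim=-1)
    b = F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature
    targets = torch.arange(a.size(0), device=a.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Toy batch: 4 CT volumes, each with 8 local patch embeddings and one
# global volume embedding, paired with sentence-level and report-level
# text embeddings (all dimensions are placeholders).
B, P, D = 4, 8, 256
local_img = torch.randn(B, P, D)    # patch-level visual features
global_img = torch.randn(B, D)      # volume-level visual feature
local_txt = torch.randn(B, P, D)    # sentence-level text features
global_txt = torch.randn(B, D)      # report-level text feature

# Local alignment: match each patch with its paired sentence embedding,
# flattening the (batch, patch) axes into one contrastive batch.
loss_local = info_nce(local_img.reshape(B * P, D),
                      local_txt.reshape(B * P, D))
# Global alignment: match whole volumes with whole reports.
loss_global = info_nce(global_img, global_txt)
loss = loss_local + loss_global
```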