SCALE-VLP: Soft-Weighted Contrastive Volumetric Vision–Language Pre-training with Spatial-Knowledge Semantics
Keywords: Medical Image Analysis, Volumetric representation learning
Abstract: Vision–language models (VLMs) have demonstrated strong cross-modal capabilities,
yet most work remains limited to 2D data and assumes binary supervision
(i.e., positive vs. negative pairs), overlooking the continuous and structured dependencies
present in volumetric data such as CT. Existing approaches often treat
volumetric scans as independent 2D slices, compromising spatial coherence and
underutilizing rich clinical semantics. We propose SCALE-VLP, a soft-weighted
contrastive vision–language pre-training framework that integrates (i) volumetric
spatial semantics to preserve anatomical structure and (ii) domain-aware,
knowledge-infused semantics (e.g., radiological ontologies) to guide alignment.
This yields structurally consistent and semantically grounded representations under
limited supervision, with strong cross-task transferability (retrieval, report
generation, and classification) and cross-domain generalizability without further
fine-tuning. In particular, compared to the previous
state of the art, SCALE-VLP achieves up to 4.3× higher top-1 CT–report retrieval,
improves abnormality classification by 10 points, and reaches ROUGE-L 0.44 and
BERT-F1 0.89 for report generation. Further, in zero-shot evaluation on an
out-of-domain external dataset, we observe consistent gains, indicating the cross-task
and cross-domain generalization ability of SCALE-VLP.
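The central idea, replacing strictly binary positive/negative supervision with soft, domain-aware targets in the contrastive objective, can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch example and not the authors' implementation: it assumes a precomputed pairwise similarity matrix (named soft_sim here, e.g., derived from report or ontology similarity) and illustrative hyperparameters tau and alpha.

# Minimal sketch (assumed, not the paper's released code): soft-weighted contrastive loss.
# Standard CLIP-style InfoNCE uses one-hot (binary) targets; here the targets are blended
# with a soft distribution derived from a domain-aware similarity matrix.
import torch
import torch.nn.functional as F

def soft_weighted_contrastive_loss(
    image_emb: torch.Tensor,   # (B, D) volumetric CT embeddings, L2-normalized
    text_emb: torch.Tensor,    # (B, D) report embeddings, L2-normalized
    soft_sim: torch.Tensor,    # (B, B) assumed domain-aware pair similarity in [0, 1]
    tau: float = 0.07,         # temperature for the contrastive logits (illustrative)
    alpha: float = 0.5,        # mix between hard (one-hot) and soft targets (illustrative)
) -> torch.Tensor:
    """Cross-modal contrastive loss with soft targets instead of strictly binary pairs."""
    logits = image_emb @ text_emb.t() / tau            # (B, B) image-to-text logits

    # Hard targets: the usual diagonal of matched CT/report pairs.
    hard_targets = torch.eye(logits.size(0), device=logits.device)

    # Soft targets: normalize the domain-aware similarity into a per-row distribution.
    soft_targets = soft_sim / soft_sim.sum(dim=1, keepdim=True).clamp_min(1e-8)

    # Blend hard and soft supervision: the matched pair stays dominant while
    # semantically related pairs (e.g., shared findings) receive partial credit.
    targets = alpha * hard_targets + (1.0 - alpha) * soft_targets

    # Symmetric cross-entropy over both retrieval directions.
    loss_i2t = -(targets * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
    loss_t2i = -(targets.t() * F.log_softmax(logits.t(), dim=1)).sum(dim=1).mean()
    return 0.5 * (loss_i2t + loss_t2i)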
Primary Area: applications to physical sciences (physics, chemistry, biology, etc.)
Submission Number: 15804