Cross-Domain Adaptation of a Chest CT Vision-Language Model for Intracranial Hemorrhage Detection

15 Apr 2026 (modified: 16 Apr 2026) · MIDL 2026 Short Papers Submission · CC BY 4.0
Keywords: Vision-Language Model, Domain Adaptation, Intracranial Hemorrhage, CT
TL;DR: Adapting a chest CT vision-language model to brain CT with minimal supervision enables effective intracranial hemorrhage detection, with preprocessing window choice proving as impactful as the adaptation strategy itself.
Registration Requirement: Yes
Abstract: Pre-trained medical vision-language models (VLMs) offer strong representational capacity, yet their transferability across anatomically distinct CT domains remains underexplored. We investigate cross-domain adaptation of CT-CLIP, a VLM pre-trained on chest CT-report pairs, for intracranial hemorrhage detection in brain CT using the CQ500 dataset. We compare three preprocessing window settings (brain, subdural, and bone) and evaluate a spectrum of adaptation strategies: zero-shot inference, vocabulary fine-tuning (VocabFine), linear probing, LoRA, partial fine-tuning, and few-shot learning. Performance is assessed via AUROC and average precision, with F1-scores reported at Youden's J-based thresholds. Zero-shot transfer yields near-chance performance (AUROC 0.48–0.55), confirming a substantial domain gap, while supervised adaptation delivers consistent gains, with partial fine-tuning achieving the highest AUROC (0.736). In the few-shot setting, classifier-based methods with VocabFine outperform retrieval-based approaches, and performance scales steadily with shot count. Preprocessing window choice critically influences cross-domain performance, suggesting it should be treated as a core adaptation decision rather than a fixed implementation detail. These findings offer practical guidance for deploying pre-trained medical VLMs beyond their original training domain.
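For concreteness, below is a minimal Python sketch of the two implementation details the abstract leans on: Hounsfield-unit (HU) windowing for preprocessing and Youden's J for threshold selection. The window level/width values are typical radiology presets, not the paper's reported settings, and the helper names are illustrative.

```python
import numpy as np
from sklearn.metrics import roc_curve

# Assumed (typical) window presets as (level, width) in Hounsfield units;
# the paper's exact values are not specified in the abstract.
WINDOWS = {
    "brain":    (40, 80),
    "subdural": (75, 215),
    "bone":     (600, 2800),
}

def apply_window(hu: np.ndarray, level: float, width: float) -> np.ndarray:
    """Clip a CT volume in HU to a window and rescale to [0, 1]."""
    lo, hi = level - width / 2, level + width / 2
    return (np.clip(hu, lo, hi) - lo) / (hi - lo)

def youden_threshold(y_true: np.ndarray, y_score: np.ndarray) -> float:
    """Pick the decision threshold maximizing Youden's J = TPR - FPR."""
    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    return thresholds[np.argmax(tpr - fpr)]
```

Binarizing model scores at the Youden threshold is a standard way to report F1 from an ROC analysis; the windowing step determines the input intensity distribution the model sees, which is why the paper argues it should be treated as an adaptation decision.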
Visa & Travel: No
Read CFP & Author Instructions: Yes
Originality Policy: Yes
Single-blind & Not Under Review Elsewhere: Yes
LLM Policy: Yes
Submission Number: 86