Cross-modal Reflection Makes Med-VLMs Robust to Noisy User Prompts

ICLR 2026 Conference Submission 12643 Authors

18 Sept 2025 (modified: 08 Oct 2025), ICLR 2026 Conference Submission, CC BY 4.0
Keywords: Med-VLM, Clinical Prompts
TL;DR: A benchmark and method for enhancing the robustness of Med-VLMs against noisy user prompts
Abstract: Medical vision-language models (Med-VLMs) offer a new and effective paradigm for digital health in tasks such as disease diagnosis from clinical images and text. In these tasks, an important but underexplored research question is how Med-VLMs interpret and respond to user-provided clinical information, especially when the prompts are noisy. For a systematic evaluation, we construct Med-CP, a large-scale visual question answering (VQA) benchmark designed to comprehensively evaluate the influence of clinical prompts across diverse modalities, anatomical regions, and diagnostic tasks. Our experiments reveal that existing Med-VLMs tend to follow user-provided prompts blindly, regardless of whether they are accurate, raising concerns about their reliability in real-world interactions. To address this problem, we introduce a novel supervised fine-tuning (SFT) approach for Med-VLMs based on cross-modal reflection across medical images and text. In our SFT method, the Med-VLM is trained to produce separate reasoning paths for analyzing the medical image and the user-provided prompt. The final answer is then determined through a reflection over the visual and textual reasoning paths. Experimental results demonstrate that our method considerably enhances robustness to noisy user-provided prompts in both in-domain and out-of-domain evaluation scenarios.
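To make the cross-modal reflection idea concrete, below is a minimal sketch of how one SFT training record with explicit visual reasoning, prompt reasoning, and reflection steps might be structured. The field names, chat layout, bracketed section tags, and the `build_reflection_example` helper are illustrative assumptions for exposition, not the authors' released data format.

```python
# Hypothetical sketch: structuring one SFT example with separate visual and
# textual reasoning paths plus a final reflection step. All field names and
# tags below are assumptions, not the paper's actual data schema.
import json


def build_reflection_example(image_path, user_prompt, question,
                             visual_reasoning, prompt_reasoning,
                             reflection, answer):
    """Assemble a single supervised fine-tuning record.

    The target response keeps both reasoning paths explicit so the model
    learns to weigh image evidence against a possibly noisy user-provided
    clinical prompt before committing to an answer.
    """
    target = (
        f"[Visual analysis] {visual_reasoning}\n"
        f"[Prompt analysis] {prompt_reasoning}\n"
        f"[Reflection] {reflection}\n"
        f"[Answer] {answer}"
    )
    return {
        "image": image_path,
        "conversations": [
            {"role": "user",
             "content": f"Clinical prompt: {user_prompt}\nQuestion: {question}"},
            {"role": "assistant", "content": target},
        ],
    }


# Example with a deliberately misleading clinical prompt, so the supervision
# signal rewards grounding the answer in the image rather than the prompt.
record = build_reflection_example(
    image_path="cxr_0001.png",
    user_prompt="The referring note mentions suspected pneumothorax.",
    question="Is a pneumothorax present?",
    visual_reasoning="No visceral pleural line or region of absent lung markings is visible.",
    prompt_reasoning="The note raises pneumothorax, but referral notes can be inaccurate.",
    reflection="The image evidence does not support the prompt's suspicion.",
    answer="No",
)
print(json.dumps(record, indent=2))
```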
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 12643