Keywords: Unsupervised Learning, Contrastive Learning, Medical Multimodality
TL;DR: We propose a novel zero-shot classification evaluation method for medical vision-language models, along with a visual entailment-based contrastive learning method, achieving state-of-the-art performance on downstream tasks.
Abstract: In recent years, contrastive learning techniques have achieved significant success and have been widely applied in both general and medical domains. In the general domain, image captions typically describe only objects present in the image. However, in the medical field, radiology reports contain both sentences confirming the presence of diseases or abnormalities (positive mentions) and sentences explicitly ruling them out (negative mentions). Current vision-language pretraining models in the medical domain often overlook this critical distinction in both model evaluation (e.g., zero-shot classification) and training processes.
In this paper, we propose an additional zero-shot classification evaluation method. Unlike previous approaches that only assess the semantic similarity between medical images and positive mentions of different disease categories, this method evaluates the model's ability to distinguish between positive and negative mentions of a given disease category with respect to a medical image. Furthermore, to better capture the complex semantic relationships between medical images and their corresponding radiology reports, we introduce a visual entailment-based contrastive learning method that explicitly models the entailment, contradiction, and neutral relationships between medical images and report sentences.
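The proposed evaluation can be illustrated with a minimal sketch, assuming CLIP-style image and text embeddings. The function names, embedding shapes, and temperature value below are illustrative assumptions, not the authors' implementation: for each disease category, the image is compared against both a positive-mention prompt and a negative-mention prompt, and a per-class softmax over the two similarities gives the probability that the disease is present.

```python
import numpy as np

def zero_shot_pos_neg(img_emb, pos_emb, neg_emb, temperature=0.07):
    """Hypothetical pos/neg zero-shot scoring.

    img_emb: (d,) image embedding.
    pos_emb: (num_classes, d) embeddings of positive-mention sentences.
    neg_emb: (num_classes, d) embeddings of negative-mention sentences.
    Returns: (num_classes,) probability that each disease is present.
    """
    # L2-normalize so dot products are cosine similarities (CLIP-style).
    def norm(x):
        return x / np.linalg.norm(x, axis=-1, keepdims=True)
    img, pos, neg = norm(img_emb), norm(pos_emb), norm(neg_emb)

    # Similarity of the image to the positive / negative mention of each class.
    sim_pos = pos @ img / temperature  # (num_classes,)
    sim_neg = neg @ img / temperature  # (num_classes,)

    # Per-class softmax over {positive mention, negative mention}.
    logits = np.stack([sim_pos, sim_neg], axis=-1)
    logits -= logits.max(axis=-1, keepdims=True)  # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=-1, keepdims=True)
    return probs[:, 0]  # probability assigned to the positive mention
```

In contrast, a conventional zero-shot protocol would softmax over the positive mentions of all classes only; scoring each class against its own negative mention is what lets this evaluation test whether the model actually distinguishes presence from explicit absence.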
Experimental results demonstrate that integrating this new evaluation method yields a more comprehensive assessment of vision-language pretraining models in the medical domain. Additionally, our model achieves state-of-the-art performance across various downstream tasks, highlighting the effectiveness of our approach.
Supplementary Material: zip
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 9769