Keywords: BiomedCLIP, CLIP, CLIPSeg, Image segmentation, Vision-language models
TL;DR: Ensembling CLIP-based Vision-Language Models with a low-complexity CNN for enhanced medical image segmentation performance
Abstract: Vision-language models and their adaptations to image segmentation tasks hold enormous potential for producing highly accurate and interpretable results. However, implementations based on CLIP and BiomedCLIP still lag behind more sophisticated architectures such as CRIS. In this work, instead of focusing on text prompt engineering, as is the norm, we attempt to narrow this gap by showing how to ensemble vision-language segmentation models (VLSMs) with a low-complexity CNN. With this approach, we achieve a significant average Dice gain of 6.3% with the ensembled BiomedCLIPSeg on the BKAI polyp dataset. Furthermore, we provide initial experimental results on four other radiology and non-radiology datasets. We conclude that ensembling works differently across these datasets (ranging from outperforming to underperforming the CRIS model), indicating a topic for future investigation by the community. The code will be released at
https://github.com/juliadietlmeier/VLSM-Ensemble.
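The abstract does not specify the ensembling mechanism, so the following is only a minimal sketch of one plausible reading: output-level blending of segmentation probability maps from a VLSM and a lightweight CNN. The class name `OutputEnsemble`, the blending weight `alpha`, and the model call signatures are illustrative assumptions, not the authors' implementation (see the released code for the actual method).

```python
import torch
import torch.nn as nn


class OutputEnsemble(nn.Module):
    """Hypothetical output-level ensemble of a vision-language segmentation
    model (e.g., BiomedCLIPSeg) and a low-complexity CNN.

    Assumption: both models emit per-pixel logits of the same spatial size,
    and the ensemble is a convex combination of their probability maps.
    """

    def __init__(self, vlsm: nn.Module, cnn: nn.Module, alpha: float = 0.5):
        super().__init__()
        self.vlsm = vlsm    # VLSM conditioned on image + text prompt
        self.cnn = cnn      # lightweight image-only CNN
        self.alpha = alpha  # blending weight (assumed hyperparameter)

    def forward(self, image: torch.Tensor, text) -> torch.Tensor:
        # The VLSM consumes both the image and a text prompt; the CNN
        # operates on the image alone.
        p_vlsm = torch.sigmoid(self.vlsm(image, text))
        p_cnn = torch.sigmoid(self.cnn(image))
        # Weighted average of the two probability maps; thresholding the
        # result (e.g., at 0.5) yields the final binary segmentation mask.
        return self.alpha * p_vlsm + (1.0 - self.alpha) * p_cnn
```

Output-level averaging is only the simplest way to combine the two predictors; feature-level fusion or learned per-pixel weighting are equally compatible with the description above.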
Submission Number: 15