MedSAM-U: Uncertainty-Guided Auto Multi-Prompt Adaptation for Reliable MedSAM

Nan Zhou, Ke Zou, Kai Ren, Mengting Luo, Linchao He, Meng Wang, Yidi Chen, Yi Zhang, Hu Chen, Huazhu Fu

Published: 01 Jan 2025, Last Modified: 05 Nov 2025. IEEE Transactions on Circuits and Systems for Video Technology. License: CC BY-SA 4.0
Abstract: The Medical Segment Anything Model (MedSAM) has demonstrated strong performance in medical image segmentation, attracting increasing attention in the medical imaging domain. However, as with many prompt-based segmentation models, its performance is highly sensitive to the type and location of input prompts. This sensitivity often leads to suboptimal segmentation outcomes and necessitates labor-intensive manual prompt tuning, which hampers both efficiency and robustness. To address this challenge, this paper proposes MedSAM-U, an uncertainty-guided framework designed to automatically refine prompt inputs and enhance segmentation reliability. Specifically, a Multi-Prompt Adapter is integrated into MedSAM, resulting in MPA-MedSAM, which enables the model to effectively accommodate diverse multi-prompt inputs. An uncertainty estimation module is then introduced to evaluate the reliability of the prompts and their initial segmentation results. Based on this, a novel uncertainty-guided prompt adaptation strategy is applied to automatically generate refined prompts and more accurate segmentation outputs. The proposed MedSAM-U framework is evaluated across multiple medical imaging modalities. Experimental results on five diverse datasets demonstrate that MedSAM-U achieves consistent performance improvements ranging from 1.7% to 20.5% over the baseline MedSAM, confirming its effectiveness and practicality for robust and efficient medical image segmentation.
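To make the described pipeline concrete, the sketch below illustrates one possible form of an uncertainty-guided prompt adaptation step: an ensemble of jittered box prompts yields a mean prediction and a variance-based uncertainty map, and a refined box is fitted to the confidently segmented region. This is only an illustrative reading of the abstract, not the authors' implementation; the names `segment`, `jitter_box`, `refine_prompt`, and the parameters `n_samples` and `tau` are hypothetical stand-ins for MPA-MedSAM, the multi-prompt inputs, and the uncertainty estimation module.

```python
# Minimal sketch (not the authors' code) of uncertainty-guided prompt refinement.
# Assumption: `segment(image, box)` stands in for MPA-MedSAM and returns a
# per-pixel foreground-probability map for a (x0, y0, x1, y1) box prompt.
import numpy as np

def jitter_box(box, scale, rng):
    """Randomly shift a (x0, y0, x1, y1) box by up to `scale` of its size."""
    x0, y0, x1, y1 = box
    w, h = x1 - x0, y1 - y0
    dx, dy = rng.uniform(-scale, scale, 2) * np.array([w, h])
    return (x0 + dx, y0 + dy, x1 + dx, y1 + dy)

def refine_prompt(image, box, segment, n_samples=8, tau=0.5, rng=None):
    """One adaptation step: ensemble over jittered prompts, keep
    low-uncertainty foreground pixels, and fit a tighter box around them."""
    rng = rng or np.random.default_rng(0)
    probs = np.stack([segment(image, jitter_box(box, 0.1, rng))
                      for _ in range(n_samples)])
    mean, unc = probs.mean(0), probs.var(0)           # prediction / uncertainty maps
    confident_fg = (mean > tau) & (unc < unc.mean())  # low-uncertainty foreground
    ys, xs = np.nonzero(confident_fg)
    if xs.size == 0:
        return box, mean                              # nothing confident: keep prompt
    refined_box = (xs.min(), ys.min(), xs.max(), ys.max())
    return refined_box, mean
```

In this reading, the refined box would be fed back to the model for a second segmentation pass, which matches the abstract's description of automatically generating refined prompts and more accurate outputs; the actual adaptation rule in the paper may differ.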