Keywords: Medical image segmentation, Segment Anything Model
Abstract: Accurate medical image segmentation is crucial for clinical applications but remains challenging due to ambiguous boundaries, multi-scale anatomies, and the high cost of expert annotations. While deep learning models often produce coarse initial masks, enhancing them into clinically reliable outputs is a critical yet under-explored problem. We propose SAMedEnhancer, a generic medical image segmentation enhancement framework that refines coarse masks from any segmentation model using a strategically adapted Segment Anything Model (SAM). Our key innovation is a morphology-aware prompt generation strategy: initial masks are first analyzed via connected-component and shape analysis to identify reliable anatomical regions, after which a hierarchical prompting mechanism samples positive points from high-confidence interiors, selects negative points from informative nearby backgrounds within dilated regions, and supplements both with bounding boxes enclosing the refined targets. This coarse-to-fine prompting robustly guides SAM to recover accurate boundaries while resisting error propagation from imperfect inputs. We extensively validate SAMedEnhancer on a comprehensive benchmark for medical image segmentation enhancement, encompassing several datasets across various imaging modalities and both fully- and semi-supervised settings. Results demonstrate that our method consistently improves segmentation quality from state-of-the-art segmenters, reduces annotation dependency, and serves as a versatile accelerator for medical image segmentation.
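The morphology-aware prompt generation described in the abstract can be sketched roughly as below. This is a minimal illustration, not the authors' implementation: the function name generate_prompts, the area threshold, the interior-distance cutoff, the number of sampled points, and the dilation radius are all assumptions made for the example. It uses NumPy and SciPy connected-component labeling, distance transforms, and binary dilation to mirror the steps the abstract outlines.

```python
import numpy as np
from scipy import ndimage


def generate_prompts(coarse_mask, min_area=50, n_pos=3, n_neg=3,
                     dilate_iters=5, seed=0):
    """Turn a coarse binary mask into per-component point/box prompts.

    Illustrative hyperparameters only: min_area drops spurious blobs,
    n_pos/n_neg set how many points are sampled, and dilate_iters controls
    how far into the background negative points may lie.
    """
    rng = np.random.default_rng(seed)
    mask = coarse_mask.astype(bool)
    labeled, n_comp = ndimage.label(mask)  # connected-component analysis

    prompts = []
    for comp_id in range(1, n_comp + 1):
        comp = labeled == comp_id
        if comp.sum() < min_area:          # size check: discard tiny blobs
            continue

        # Positive points: sampled from the high-confidence interior,
        # i.e. pixels far from the component boundary (distance transform).
        dist = ndimage.distance_transform_edt(comp)
        interior = dist >= 0.5 * dist.max()
        ys, xs = np.nonzero(interior)
        idx = rng.choice(len(ys), size=min(n_pos, len(ys)), replace=False)
        pos_points = np.stack([xs[idx], ys[idx]], axis=1)   # (x, y) pairs

        # Negative points: informative nearby background inside a dilated
        # band around the component, excluding every foreground pixel.
        band = ndimage.binary_dilation(comp, iterations=dilate_iters) & ~mask
        ys_n, xs_n = np.nonzero(band)
        if len(ys_n) > 0:
            idx_n = rng.choice(len(ys_n), size=min(n_neg, len(ys_n)),
                               replace=False)
            neg_points = np.stack([xs_n[idx_n], ys_n[idx_n]], axis=1)
        else:
            neg_points = np.empty((0, 2), dtype=int)

        # Bounding box enclosing the retained component (x0, y0, x1, y1).
        ys_c, xs_c = np.nonzero(comp)
        box = np.array([xs_c.min(), ys_c.min(), xs_c.max(), ys_c.max()])

        prompts.append({"positive": pos_points,
                        "negative": neg_points,
                        "box": box})
    return prompts
```

In this reading, the per-component positive points, negative points, and box would then be passed to a SAM predictor as point prompts (with foreground/background labels) plus a box prompt to obtain the refined boundary; that interface is outside the scope of this sketch.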
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 6118