SnapSeg: Training-Free Few-Shot Medical Image Segmentation with Segment Anything Model

Published: 01 Jan 2024 · Last Modified: 16 Apr 2025 · TAI4H 2024 · CC BY-SA 4.0
Abstract: In the pursuit of advancing medical diagnosis, automatic segmentation of medical images is crucial, particularly for extending medical expertise to under-resourced regions. However, collecting and annotating medical data for deep learning frameworks is both time-consuming and expensive. Few-shot learning, which leverages limited labeled data to learn new tasks, has been widely applied to medical image segmentation with considerable success. Nonetheless, these methods often still rely on extensive unlabeled data to acquire prior medical knowledge. We introduce SnapSeg, a novel few-shot segmentation framework that requires only a minimal set of labeled images to directly tackle new segmentation tasks, bypassing the need for a traditional training phase. Using a single labeled example or a few of them, SnapSeg extracts multi-level features from the Segment Anything Model (SAM)'s image encoder and incorporates a relative anchor algorithm for precise spatial assessment. Our method demonstrates state-of-the-art performance on the widely used Abd-CT dataset for medical image segmentation.
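The core idea of training-free few-shot segmentation with a frozen encoder can be sketched as follows: pool the support image's features under its label mask into a foreground prototype, then classify each query location by feature similarity. This is only an illustrative sketch of prototype matching, not the authors' implementation; the function names, feature shapes, and similarity threshold here are assumptions, and SnapSeg's multi-level features and relative anchor algorithm are not modeled.

```python
import numpy as np

def cosine_sim(proto, feats):
    """Cosine similarity between one prototype (d,) and many features (N, d)."""
    proto = proto / (np.linalg.norm(proto) + 1e-8)
    feats = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-8)
    return feats @ proto

def few_shot_segment(support_feats, support_mask, query_feats, thresh=0.5):
    """Training-free segmentation by prototype matching (illustrative sketch).

    support_feats: (H, W, d) frozen-encoder features of the labeled support image
    support_mask:  (H, W) binary mask of the target structure
    query_feats:   (H, W, d) frozen-encoder features of the query image
    Returns a (H, W) uint8 binary mask for the query image.
    """
    d = support_feats.shape[-1]
    # Average the support features inside the mask into a single prototype.
    fg = support_feats[support_mask.astype(bool)]            # (N_fg, d)
    prototype = fg.mean(axis=0)                              # (d,)
    # Score every query location against the prototype and threshold.
    sims = cosine_sim(prototype, query_feats.reshape(-1, d))
    return (sims.reshape(query_feats.shape[:2]) > thresh).astype(np.uint8)
```

In practice, a real SAM-based pipeline would replace the synthetic features with encoder activations and refine the similarity map (e.g., with prompts or multi-level fusion); the threshold 0.5 is an arbitrary placeholder.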