Probabilistically Prompted SAMs Are Efficient Segmentator for Ambiguous Medical Images

Published: 01 Jan 2024, Last Modified: 09 Nov 2024 · ACM Multimedia 2024 · CC BY-SA 4.0
Abstract: Generating diverse plausible outputs from a single input is crucial for addressing visual ambiguities, exemplified in medical imaging where experts may provide varying semantic segmentation annotations for the same image. Existing methods handle ambiguous segmentation by relying on probabilistic modeling and extensive multi-output annotated data, yet they often struggle with the limited ambiguously labeled datasets common in real-world applications. To surmount this challenge, we propose P²SAM, a novel framework that leverages the Segment Anything Model (SAM)'s prior knowledge for ambiguous object segmentation. By turning SAM's sensitivity to prompts into an advantage, we introduce a prior probabilistic space over prompts. Experimental results show that P²SAM significantly enhances the precision and diversity of medical segmentation using minimal ambiguously annotated samples. Benchmarking against state-of-the-art methods demonstrates superior performance with just 5.5% of the training data (+12% Dmax). This approach marks a significant advancement toward deploying probabilistic models in data-limited real-world scenarios.
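The core idea stated in the abstract (sampling prompts from a prior probabilistic space so that a prompt-sensitive segmenter produces a set of plausible masks rather than one) can be illustrated with a minimal, hypothetical sketch. The Gaussian prior over prompt coordinates and the toy `segment` stub below are illustrative assumptions for exposition only, not P²SAM's actual architecture or SAM's API.

```python
import numpy as np

rng = np.random.default_rng(0)

def segment(image, prompt_xy):
    """Toy stand-in for a promptable segmenter (NOT the real SAM):
    masks pixels within a fixed radius of the prompt point."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(xs - prompt_xy[0], ys - prompt_xy[1])
    return dist < 8.0  # boolean mask

image = np.zeros((32, 32))

# Assumed prior probabilistic space over prompts: a Gaussian around a
# nominal prompt location. Each draw is one plausible prompt.
prompt_mean = np.array([16.0, 16.0])
prompt_std = 3.0

# One mask per sampled prompt -> a diverse set of plausible segmentations.
masks = [segment(image, rng.normal(prompt_mean, prompt_std)) for _ in range(4)]

# Quantify diversity as pixel disagreement against the first sample.
disagreement = [int(np.logical_xor(masks[0], m).sum()) for m in masks[1:]]
print(disagreement)
```

Because the prompts are sampled rather than fixed, the resulting masks differ from one another, mimicking the variation between expert annotations that the paper targets.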