Abstract: SAM2, the latest large foundation model for promptable segmentation, has demonstrated significant potential in 3D medical image segmentation owing to its capability to segment video streams effectively. However, applying it to medical image segmentation remains challenging: achieving optimal performance requires either extensive training on medical images or high-quality prompts provided by experts. To address these limitations, we propose SAM2-SP, which adopts Low-Rank Adaptation (LoRA) for parameter-efficient fine-tuning and introduces a novel dynamic self-prompting strategy that generates the most confident prompt templates from voxel features, enabling SAM2 to adapt to the medical imaging domain without reliance on expert-level prompts. Extensive experiments show that SAM2-SP achieves state-of-the-art performance on the public Synapse dataset and the private EDC dataset, outperforming task-specific segmentation approaches, the vanilla SAM, and other SAM-based approaches.
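For readers unfamiliar with Low-Rank Adaptation, the sketch below illustrates the general mechanism in PyTorch: pretrained projection weights are frozen and a trainable rank-r update is added in parallel. The module and layer names (LoRALinear, q_proj, v_proj), the rank, and the scaling are illustrative assumptions, not the authors' actual SAM2-SP implementation.

```python
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear projection with a trainable low-rank update:
    y = W x + (alpha / r) * B(A(x)), where A and B are rank-r factors."""
    def __init__(self, base: nn.Linear, r: int = 4, alpha: float = 4.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # keep pretrained weights frozen
        self.lora_A = nn.Linear(base.in_features, r, bias=False)
        self.lora_B = nn.Linear(r, base.out_features, bias=False)
        nn.init.zeros_(self.lora_B.weight)   # update starts at zero, so behavior matches the base layer
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * self.lora_B(self.lora_A(x))

# Hypothetical usage: wrap the query/value projections of an attention block
attn = nn.ModuleDict({
    "q_proj": nn.Linear(256, 256),
    "v_proj": nn.Linear(256, 256),
})
attn["q_proj"] = LoRALinear(attn["q_proj"], r=4)
attn["v_proj"] = LoRALinear(attn["v_proj"], r=4)
```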