Part-aware Prompted Segment Anything Model for Adaptive Segmentation

TMLR Paper4143 Authors

04 Feb 2025 (modified: 22 Apr 2025) · Decision pending for TMLR · CC BY 4.0
Abstract: Precision medicine, such as patient-adaptive treatments assisted by medical image analysis, poses new challenges for image segmentation algorithms due to the large variability across patients and the limited availability of annotated data for each patient. In this work, we propose a data-efficient segmentation method to address these challenges, namely the $\textit{\textbf{P}art-aware}$ $\textit{\textbf{P}rompted}$ $\textit{\textbf{S}egment}$ $\textit{\textbf{A}nything}$ $\textit{\textbf{M}odel}$ ($\mathbf{{P}^{2}SAM}$). Without any model fine-tuning, $\text{P}^2\text{SAM}$ adapts seamlessly to new patients, relying only on one-shot patient-specific data. We introduce a novel part-aware prompt mechanism that selects multiple point prompts based on part-level features of the one-shot data and can be integrated into a variety of promptable segmentation models, such as SAM and SAM 2. To further improve the robustness of the part-aware prompt mechanism, we propose a distribution-guided retrieval approach that determines the optimal number of part-level features for a specific case. $\text{P}^2\text{SAM}$ improves performance by $\texttt{+} 8.0\%$ and $\texttt{+} 2.0\%$ mean Dice score on two different patient-adaptive segmentation applications, respectively. In addition, $\text{P}^2\text{SAM}$ exhibits strong generalizability to other adaptive segmentation tasks in the natural image domain, $\textit{e.g.}$, $\texttt{+} 6.4\%$ mIoU on the personalized object segmentation task. Code will be released upon acceptance.
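As a rough illustration of the mechanism sketched in the abstract, the snippet below clusters the one-shot reference features inside the reference mask into part-level prototypes and places one positive point prompt per prototype on the target image. This is a minimal sketch under our own assumptions (feature shapes, a k-means-style clustering, and the helper names `part_level_prototypes` / `part_aware_point_prompts` are all hypothetical); it is not the authors' released implementation.

```python
# Illustrative sketch of a part-aware prompt mechanism (assumptions noted above).
import torch
import torch.nn.functional as F


def part_level_prototypes(ref_feats, ref_mask, num_parts, iters=10):
    """Cluster masked reference features into part-level prototypes.

    ref_feats: (C, H, W) image-encoder features of the one-shot reference image.
    ref_mask:  (H, W) binary mask of the reference object.
    """
    fg = ref_feats.permute(1, 2, 0)[ref_mask.bool()]      # (N, C) foreground features
    fg = F.normalize(fg, dim=-1)
    idx = torch.randperm(fg.shape[0])[:num_parts]          # random init from foreground
    protos = fg[idx].clone()
    for _ in range(iters):                                  # a few Lloyd-style iterations
        assign = (fg @ protos.t()).argmax(dim=1)            # nearest prototype per pixel
        for k in range(protos.shape[0]):
            members = fg[assign == k]
            if len(members) > 0:
                protos[k] = F.normalize(members.mean(dim=0), dim=0)
    return protos                                           # (K, C) part-level prototypes


def part_aware_point_prompts(tgt_feats, protos):
    """Pick one point per prototype on the target image (highest cosine similarity)."""
    C, H, W = tgt_feats.shape
    flat = F.normalize(tgt_feats.reshape(C, -1), dim=0)      # (C, H*W), unit-norm per location
    sim = protos @ flat                                      # (K, H*W) similarity maps
    idx = sim.argmax(dim=1)
    ys, xs = idx // W, idx % W
    return torch.stack([xs, ys], dim=1)                      # (K, 2) point prompts in (x, y)
```

In such a setup, the returned points would be fed to a promptable model (e.g., a SAM predictor) as positive point prompts, and the distribution-guided retrieval described in the abstract would amount to repeating the procedure for several candidate numbers of parts and keeping the one whose resulting segmentation best matches the reference part-level feature distribution.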
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Jianbo_Jiao2
Submission Number: 4143
