Abstract: Few-shot segmentation (FSS) has garnered significant attention. Many recent approaches attempt to introduce the segment anything model (SAM) to handle this task. With its strong generalization ability and rich object-specific feature extraction, SAM shows great potential for FSS. However, the decoding process of SAM relies heavily on accurate and explicit prompts, so previous approaches mainly focus on extracting prompts from the support set. This is insufficient to activate the generalization ability of SAM, and the design easily results in a biased decoding process when adapting to unknown classes. In this work, we propose an unbiased semantic decoding (USD) strategy integrated with SAM, which extracts target information from both the support and query sets simultaneously to perform consistent predictions guided by the semantics of the contrastive language-image pretraining (CLIP) model. Specifically, to enhance the unbiased semantic discrimination of SAM, we design two feature enhancement strategies that leverage the semantic alignment capability of CLIP to enrich the original SAM features: a global supplement at the image level, which provides a generalized category indication from the support image, and a local guidance at the pixel level, which provides useful target localization from the query image. In addition, to generate target-focused prompt embeddings, a learnable visual–text target prompt generator (VTPG) is proposed that interacts target text embeddings with CLIP visual features. Without requiring retraining of the vision foundation models, the semantically discriminative features attend to the target region under the guidance of prompts rich in target information. Experiments on both the PASCAL-$5^{i}$ and COCO-$20^{i}$ benchmarks show that our proposed method outperforms existing approaches by a clear margin and achieves new state-of-the-art performance.
External IDs: dblp:journals/tnn/WangZPLLC26