Exploring Deeper! Segment Anything Model with Depth Perception for Camouflaged Object Detection

Published: 20 Jul 2024, Last Modified: 21 Jul 2024 · MM2024 Poster · CC BY 4.0
Abstract: This paper introduces a new Segment Anything Model with Depth Perception (DSAM) for Camouflaged Object Detection (COD). DSAM exploits the zero-shot capability of SAM to achieve precise segmentation in the RGB-D domain. It consists of the Prompt-Deeper Module and the Finer Module. The Prompt-Deeper Module uses knowledge distillation and the Bias Correction Module to realize interaction between RGB features and depth features, in particular using depth features to correct erroneous parts of the RGB features. The interacted features are then combined with the box prompt in SAM to create a prompt with depth perception. The Finer Module explores the possibility of accurately segmenting highly camouflaged targets from a depth perspective. It uncovers depth cues in areas missed by SAM through mask reversion, self-filtering, and self-attention operations, compensating for SAM's deficiencies in the COD domain. DSAM represents the first step toward a SAM-based RGB-D COD model. It maximizes the utilization of depth features while synergizing with RGB features to achieve multimodal complementarity, thereby overcoming the segmentation limitations of SAM and improving its accuracy on COD. Experimental results on COD benchmarks demonstrate that DSAM achieves excellent segmentation performance, reaching the state of the art (SOTA) with lower consumption of training resources.
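The bias-correction idea in the Prompt-Deeper Module (depth features correcting erroneous parts of the RGB features) can be pictured as a gated residual update between the two modalities. The function name, the sigmoid gate, and the scalar `w_gate` below are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bias_correction(rgb_feat, depth_feat, w_gate=5.0):
    """Hypothetical sketch: depth-guided correction of RGB features.

    A per-element gate, driven by how strongly the two modalities
    disagree, decides how much the depth feature overrides the RGB
    feature (assumed gating form, not the published module).
    """
    gate = sigmoid(w_gate * (depth_feat - rgb_feat))  # large where depth exceeds RGB
    return (1.0 - gate) * rgb_feat + gate * depth_feat

rgb = np.array([0.9, 0.1, 0.5])    # toy RGB feature vector
depth = np.array([0.8, 0.7, 0.5])  # toy depth feature vector
corrected = bias_correction(rgb, depth)
```

Where the modalities agree (last element), the gate is neutral and the feature is unchanged; where they strongly disagree (middle element), the output is pulled toward the depth feature.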
Primary Subject Area: [Content] Vision and Language
Relevance To Conference: We propose DSAM, a variant of SAM with depth information tailored to the COD domain. The model explores the interaction between two modalities, depth and RGB, within the SAM framework for COD, where the interaction serves a complementary role between the modalities. DSAM includes the Prompt-Deeper Module (PDM) and the Finer Module (FM). PDM achieves mutual complementation of the two modalities by letting depth features and RGB features interact, yielding a novel prompt endowed with depth perception. By mining depth cues in the segments overlooked by SAM, FM compensates for SAM's original predictions and thereby improves accuracy. In summary, DSAM exploits multimodal information and advances the development of SAM in the field of COD.
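The Finer Module's mask reversion and self-filtering steps can be sketched in a few lines: invert SAM's mask to isolate the missed region, keep only missed pixels whose depth matches the detected object, and fuse them back as compensation. The depth-threshold filter and the additive fusion below are simplified assumptions standing in for the paper's self-filtering and self-attention operations:

```python
import numpy as np

def finer_module(sam_mask, depth_map, threshold=0.5):
    """Hypothetical sketch: mining depth cues in regions SAM missed.

    Mask reversion selects the area outside SAM's prediction; a simple
    depth-similarity test (a stand-in for the paper's self-filtering and
    self-attention) recovers missed pixels lying at the object's depth,
    and the result compensates SAM's original mask (assumed fusion).
    """
    reverted = 1.0 - sam_mask                          # mask reversion: missed area
    obj_depth = depth_map[sam_mask > 0.5].mean()       # depth of the detected object
    close = np.abs(depth_map - obj_depth) < threshold  # pixels near object depth
    recovered = reverted * close                       # missed pixels to recover
    return np.clip(sam_mask + recovered, 0.0, 1.0)

sam_mask = np.array([[1.0, 1.0, 0.0],
                     [0.0, 0.0, 0.0]])
depth_map = np.array([[2.0, 2.0, 2.1],
                      [5.0, 5.0, 2.05]])
refined = finer_module(sam_mask, depth_map)
```

In this toy example, the two pixels at depth ≈ 2 that SAM missed are recovered, while the background pixels at depth 5 stay excluded.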
Submission Number: 2656