Siamese Network for RGB-D Salient Object Detection and Beyond

IEEE Trans. Pattern Anal. Mach. Intell., 2022 (modified: 04 Nov 2022)
Abstract: Existing RGB-D salient object detection (SOD) models usually treat RGB and depth as independent information and design separate networks for feature extraction from each. Such schemes can easily be constrained by a limited amount of training data or over-reliance on an elaborately designed training process. Inspired by the observation that the RGB and depth modalities actually present certain commonality in distinguishing salient objects, a novel joint learning and densely cooperative fusion (JL-DCF) architecture is designed to learn from both RGB and depth inputs through a shared network backbone, known as the Siamese architecture. In this paper, we propose two effective components: joint learning (JL) and densely cooperative fusion (DCF). The JL module provides robust saliency feature learning by exploiting cross-modal commonality via a Siamese network, while the DCF module is introduced for complementary feature discovery. Comprehensive experiments using five popular metrics show that the designed framework yields a robust RGB-D saliency detector with good generalization. As a result, JL-DCF significantly advances the state-of-the-art models by an average of ~2.0% (max F-measure) across seven challenging datasets. In addition, we show that JL-DCF is readily applicable to other related multi-modal detection tasks, including RGB-T (thermal infrared) SOD and video SOD, achieving comparable or even better performance against state-of-the-art methods. We also link JL-DCF to the RGB-D semantic segmentation field, showing its capability to outperform several semantic segmentation models on the task of RGB-D SOD. These facts further confirm that the proposed framework could offer a potential solution for various applications and provide more insight into the cross-modal complementarity task.
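To make the shared-backbone (Siamese) idea concrete, below is a minimal PyTorch sketch: a single backbone with one set of weights processes both the RGB image and the (channel-replicated) depth map, and the two feature streams are then fused to predict a saliency map. The class name `SiameseSharedBackboneSOD`, the tiny convolutional backbone, the concatenation-based fusion, and the parameter `feat_dim` are illustrative placeholders and do not reproduce the paper's actual JL and DCF modules.

```python
import torch
import torch.nn as nn


class SiameseSharedBackboneSOD(nn.Module):
    """Illustrative sketch of a Siamese (weight-shared) RGB-D SOD model.

    The layer sizes and the fusion operator are placeholders; the real
    JL-DCF architecture uses a full backbone, multi-level joint learning,
    and densely cooperative fusion rather than a single concatenation.
    """

    def __init__(self, feat_dim: int = 64):
        super().__init__()
        # Shared backbone: the SAME weights are applied to both modalities.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, feat_dim, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_dim, feat_dim, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Placeholder cross-modal fusion: concatenate and mix the two streams.
        self.fuse = nn.Conv2d(2 * feat_dim, feat_dim, 3, padding=1)
        self.head = nn.Conv2d(feat_dim, 1, 1)  # saliency-map logits

    def forward(self, rgb: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
        # Replicate single-channel depth to 3 channels so the shared backbone accepts it.
        depth3 = depth.expand(-1, 3, -1, -1) if depth.size(1) == 1 else depth
        f_rgb = self.backbone(rgb)     # same weights ...
        f_dep = self.backbone(depth3)  # ... applied to both inputs (Siamese)
        fused = self.fuse(torch.cat([f_rgb, f_dep], dim=1))
        return self.head(fused)


# Usage sketch with random tensors standing in for an RGB-D pair.
model = SiameseSharedBackboneSOD()
rgb = torch.randn(1, 3, 224, 224)
depth = torch.randn(1, 1, 224, 224)
saliency_logits = model(rgb, depth)  # shape: (1, 1, 224, 224)
```

The key design point mirrored here is that, because RGB and depth share commonality in distinguishing salient objects, a single shared backbone can learn from both modalities at once, which reduces the number of parameters compared with two separate modality-specific networks.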