Depth quality-aware selective saliency fusion for RGB-D image salient object detection

2021 (modified: 05 Mar 2025), Neurocomputing 2021
Abstract: Previous RGB-D salient object detection (SOD) methods have widely adopted deep learning tools to automatically strike a trade-off between RGB and depth (D). The key rationale is to take full advantage of the complementary nature of RGB and D, aiming for SOD performance much improved over using either modality alone. However, because the D quality itself usually varies from scene to scene, such fully automatic fusion schemes may not always be helpful for the SOD task. Moreover, as an objective factor, D quality has long been overlooked by previous work. Thus, this paper proposes a simple yet effective scheme to measure D quality in advance. The key idea is to devise a series of features in accordance with the common attributes of high-quality D regions. More concretely, we advocate conducting D quality assessment following a multi-scale methodology that includes low-level edge consistency, mid-level regional uncertainty, and high-level model variance. All of these components are computed independently and later combined with RGB and D saliency cues to guide the selective RGB-D fusion. Compared with state-of-the-art fusion schemes, our method achieves a better fusion of RGB and D. Specifically, the proposed D quality measurement method achieves steady performance improvements of almost 2.0% on average.
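The core idea — scoring depth quality from the three multi-scale cues and using that score to weight the depth branch during fusion — can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual formulation: the function names, the multiplicative combination of cues, and the weighted-average fusion rule are all assumptions introduced here for clarity.

```python
import numpy as np

def depth_quality_score(edge_consistency: float,
                        regional_uncertainty: float,
                        model_variance: float) -> float:
    """Hypothetical combination of the abstract's three cues into one
    quality score in [0, 1]. Higher edge consistency indicates better
    depth; higher regional uncertainty and model variance indicate worse
    depth. The paper's actual weighting may differ."""
    return edge_consistency * (1.0 - regional_uncertainty) * (1.0 - model_variance)

def selective_fusion(sal_rgb: np.ndarray,
                     sal_d: np.ndarray,
                     quality: float) -> np.ndarray:
    """Fuse RGB and depth saliency maps, down-weighting the depth cue
    when its estimated quality is low (quality = 0 falls back to RGB
    saliency alone)."""
    fused = (sal_rgb + quality * sal_d) / (1.0 + quality)
    return np.clip(fused, 0.0, 1.0)

# Example: a high-quality depth map contributes strongly to the fusion.
q = depth_quality_score(edge_consistency=0.8,
                        regional_uncertainty=0.2,
                        model_variance=0.1)
fused = selective_fusion(np.array([0.2, 0.8]), np.array([0.9, 0.1]), q)
```

The design choice here is that an unreliable depth map should degrade gracefully toward RGB-only saliency rather than corrupt the fused result, which mirrors the abstract's motivation for measuring D quality before fusion.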
