Sample-Adapt Fusion Network for RGB-D Hand Detection in the Wild

Published: 01 Jan 2023, Last Modified: 06 Nov 2023, ICASSP 2023
Abstract: RGB and depth modalities provide complementary information, which can be effectively exploited to improve hand detection in the wild. Most existing fusion-based methods model the channel-wise or spatial-wise cross-modal correlation with operations that are shared across all input samples. However, input images exhibit highly diverse modes due to the variety of in-the-wild scenes, and this inter-sample variance cannot be effectively captured by static modeling operations shared across all samples. To address this problem, we propose a Sample-Adapt Fusion Network (SAFNet) with a Channel Dynamic Refinement Module (CDRM) and a Spatial Dynamic Aggregation Module (SDAM) to adaptively model the channel-wise and spatial-wise cross-modal correlation. Specifically, we propose a Multi-kernel Attention Module (MAM) that generates an attention map individually for each input sample by applying learnable weighting operations to multiple convolutional kernels. Our method outperforms state-of-the-art methods on the CUG Hand dataset.
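The abstract describes the Multi-kernel Attention Module as weighting multiple convolutional kernels per input sample to produce sample-adaptive attention maps. The paper itself does not give code, so the following is only a minimal PyTorch sketch of that general idea (per-sample mixing of a kernel bank, in the spirit of dynamic convolution); the class name, kernel-bank size, and gating design are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiKernelAttention(nn.Module):
    """Hypothetical sketch of a multi-kernel attention module: a
    lightweight gate produces per-sample weights over a bank of
    convolutional kernels, so each input gets its own attention map."""

    def __init__(self, channels: int, num_kernels: int = 4, kernel_size: int = 3):
        super().__init__()
        # Bank of candidate kernels: (K, C_out, C_in, k, k).
        self.kernels = nn.Parameter(
            torch.randn(num_kernels, channels, channels,
                        kernel_size, kernel_size) * 0.02)
        # Gate: global average pooling -> per-sample kernel weights.
        self.gate = nn.Linear(channels, num_kernels)
        self.pad = kernel_size // 2

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Per-sample mixing weights over the kernel bank: (B, K).
        alpha = torch.softmax(self.gate(x.mean(dim=(2, 3))), dim=1)
        # Mix the bank into one kernel per sample: (B, C_out, C_in, k, k).
        mixed = torch.einsum('bk,kocij->bocij', alpha, self.kernels)
        # Grouped-conv trick: fold the batch into channels so every
        # sample is convolved with its own mixed kernel in one call.
        out = F.conv2d(x.reshape(1, b * c, h, w),
                       mixed.reshape(b * c, c, *mixed.shape[-2:]),
                       padding=self.pad, groups=b)
        out = out.reshape(b, c, h, w)
        # Sigmoid turns the response into an attention map in [0, 1].
        return torch.sigmoid(out)
```

Because the mixing weights `alpha` depend on the input, two different samples in the same batch are processed by two different effective kernels, which is what distinguishes this from a static, sample-shared attention operation.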