Feature Refinement from Multiple Perspectives for High Performance Salient Object Detection

Published: 01 Jan 2023 · Last Modified: 07 Mar 2025 · PRCV (12) 2023 · License: CC BY-SA 4.0
Abstract: Recently, deep-learning based salient object detection methods have made great progress. However, several problems remain, such as inefficient multi-level feature fusion, unstable multi-scale context-aware feature extraction, detail loss caused by upsampling, and unbalanced distribution between foreground and background. To fuse multi-level features efficiently, we propose an attention-guided bi-directional feature refinement module (ABFRM) comprising top-down and bottom-up processes, which applies a different attention-based feature fusion strategy to each direction. To obtain stable multi-scale contextual features, we design a serial atrous fusion module (SAFM), which uses serial atrous convolutional layers with small dilation rates. To reduce the detail loss caused by upsampling with a large factor, we devise an upsampling feature refinement module (UFRM), which combines deconvolution and bilinear interpolation. To address unbalanced distribution from both the foreground and background perspectives, we propose a novel hybrid loss, which combines Intersection-over-Union (IoU) and background boundary (BGB) losses. Comprehensive experiments on five benchmark datasets demonstrate that our proposed method outperforms 13 state-of-the-art approaches under four evaluation metrics. The code is available at https://github.com/xuanli01/PRCV210.
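To make the IoU component of the hybrid loss concrete, below is a minimal sketch of the standard soft-IoU loss over flattened saliency maps, assuming predictions and ground truth are values in [0, 1]. This is only the widely used soft-IoU formulation; the paper's exact variant and its BGB term are not reproduced here.

```python
def iou_loss(pred, gt, eps=1e-7):
    """Soft IoU loss over flattened saliency maps.

    pred, gt: sequences of floats in [0, 1] (predicted saliency and
    ground-truth mask). Returns 1 - IoU, so a perfect prediction
    yields a loss near 0. Standard soft-IoU sketch; the paper's exact
    formulation may differ.
    """
    inter = sum(p * g for p, g in zip(pred, gt))
    union = sum(pred) + sum(gt) - inter
    return 1.0 - inter / (union + eps)

# Perfect overlap -> loss ~ 0; half overlap -> loss ~ 0.5
print(round(iou_loss([1.0, 1.0, 0.0], [1.0, 1.0, 0.0]), 4))  # -> 0.0
print(round(iou_loss([1.0, 1.0, 0.0], [1.0, 0.0, 0.0]), 4))  # -> 0.5
```

Unlike pixel-wise cross-entropy, this loss is computed over the whole map at once, which is why IoU-style terms are commonly used to counter unbalanced foreground/background distributions.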