Adaptive Edge-Aware Semantic Interaction Network for Salient Object Detection in Optical Remote Sensing Images

Abstract: In recent years, the task of salient object detection in optical remote sensing images (RSI-SOD) has received extensive attention. Benefiting from the development of deep learning, much progress has been made in the RSI-SOD field. However, existing methods still struggle with several challenges inherent to optical RSIs, including uncertain numbers of salient objects, cluttered backgrounds, and interference from shadows. To address these challenges, we propose a novel adaptive edge-aware semantic interaction network (AESINet) for efficient salient object detection (SOD). Specifically, to improve the extraction of complex edge information, we design a local detail aggregation module (LDAM), which adaptively enhances the edge information of salient objects by leveraging our proposed difference perception mechanism. Notably, the difference perception mechanism is a novel edge enhancement method that requires no supervision from edge ground truth. Additionally, to accurately locate salient objects of varying numbers and scales, we design a multiscale feature extraction module (MFEM), which effectively captures and exploits multiscale information. Moreover, we design a deep semantic interaction module (DSIM) to identify salient objects amid cluttered backgrounds and effectively mitigate shadow interference. We conduct extensive experiments on three well-established optical RSI datasets, and the results demonstrate that our proposed model outperforms 14 state-of-the-art methods. All code and detection results are available at https://github.com/xumingzhu989/AESINet-TGRS.
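The abstract does not spell out how the difference perception mechanism works. A common way to obtain an edge cue without edge ground truth is to subtract a locally smoothed feature map from the original one, so that the high-frequency residual highlights boundaries. The sketch below illustrates that generic idea on a plain 2-D grid; the function names (`local_mean`, `difference_perception`) and the exact enhancement formula are our own illustrative assumptions, not the paper's LDAM, which operates on learned deep feature maps.

```python
# Hypothetical "difference perception" style edge cue (illustrative only):
# edge response = |feature - locally averaged feature|, i.e. the
# high-frequency residual, computed here in pure Python on a 2-D grid.

def local_mean(feat, r=1):
    """(2r+1)x(2r+1) mean pooling with border clamping."""
    h, w = len(feat), len(feat[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = [feat[max(0, min(h - 1, i + di))][max(0, min(w - 1, j + dj))]
                    for di in range(-r, r + 1) for dj in range(-r, r + 1)]
            out[i][j] = sum(vals) / len(vals)
    return out

def difference_perception(feat):
    """Edge cue and an edge-enhanced feature (assumed formula: x * (1 + edge))."""
    smooth = local_mean(feat)
    h, w = len(feat), len(feat[0])
    edge = [[abs(feat[i][j] - smooth[i][j]) for j in range(w)] for i in range(h)]
    enhanced = [[feat[i][j] * (1.0 + edge[i][j]) for j in range(w)] for i in range(h)]
    return edge, enhanced
```

On a flat region the residual vanishes, while at an intensity step it peaks at the boundary, which is why such a cue can be trained without explicit edge annotations.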