Dual Guidance Enhancing Camouflaged Object Detection via Focusing Boundary and Localization Representation

Published: 01 Jan 2024, Last Modified: 12 Apr 2025, ICME 2024, CC BY-SA 4.0
Abstract: Camouflaged object detection (COD) aims to segment objects that blend into their surrounding environment. However, the low-level features in the shallow layers of a neural network, although rich in edge information, carry substantial redundancy, making it difficult to represent boundary details accurately. Conversely, deep high-level features retain the semantic information needed for object localization, but their progressively reduced resolution introduces bias into the localization representation. To address these issues, we propose a novel boundary and localization representation network (BLR-Net) that guides high-level features to focus on representing localization information while directing low-level features to emphasize boundary details. First, we propose a multi-scale enhanced feature module (MEFM) to capture multi-scale information from backbone features and obtain aggregated feature representations. Second, we propose an extraction boundary module (EBM) that models object boundary features, providing essential boundary cues. We then introduce a guided learning module (GLM) that uses localization features to guide high-level features toward localization representation learning and boundary features to guide low-level features toward boundary representation learning. Finally, we propose a cross-level feature fusion module (CFFM) that aggregates contextual semantic information and progressively fuses multi-level features from the bottom up to predict camouflaged objects. Extensive experiments on four benchmark COD datasets demonstrate that BLR-Net outperforms other state-of-the-art COD models.
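Since the abstract only sketches the architecture, the following minimal PyTorch rendering may help fix ideas about how the four modules could be wired together. Everything below is an illustrative assumption, not the authors' implementation: the module internals, channel widths, the choice of which levels each guidance signal modulates, and the stubbed strided-conv backbone are all hypothetical.

```python
# Hypothetical sketch of a BLR-Net-style pipeline; all internals are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MEFM(nn.Module):
    """Multi-scale enhanced feature module: parallel dilated convs capture
    multi-scale context, then a 1x1 conv aggregates them (assumed design)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, 3, padding=d, dilation=d) for d in (1, 2, 4)
        ])
        self.fuse = nn.Conv2d(3 * out_ch, out_ch, 1)

    def forward(self, x):
        return self.fuse(torch.cat([F.relu(b(x)) for b in self.branches], dim=1))

class EBM(nn.Module):
    """Extraction boundary module: predicts a boundary map from the shallowest
    enhanced feature (assumed single-level input)."""
    def __init__(self, ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 1, 1),
        )

    def forward(self, x):
        return self.conv(x)  # boundary logits

class GLM(nn.Module):
    """Guided learning module: modulates a feature map with a guidance map
    (boundary map for low levels, localization map for high levels)."""
    def __init__(self, ch):
        super().__init__()
        self.conv = nn.Conv2d(ch, ch, 3, padding=1)

    def forward(self, feat, guide):
        guide = F.interpolate(guide, size=feat.shape[-2:], mode='bilinear',
                              align_corners=False)
        return F.relu(self.conv(feat * torch.sigmoid(guide) + feat))

class CFFM(nn.Module):
    """Cross-level feature fusion module: upsamples the coarser feature and
    fuses it with the finer one for progressive prediction."""
    def __init__(self, ch):
        super().__init__()
        self.conv = nn.Conv2d(2 * ch, ch, 3, padding=1)

    def forward(self, low, high):
        high = F.interpolate(high, size=low.shape[-2:], mode='bilinear',
                             align_corners=False)
        return F.relu(self.conv(torch.cat([low, high], dim=1)))

class BLRNet(nn.Module):
    """Toy end-to-end wiring; a real backbone (e.g. ResNet) is stubbed out
    with strided convs so the sketch stays self-contained."""
    def __init__(self, ch=64):
        super().__init__()
        chans = (64, 128, 256, 512)  # 4 levels at strides 4/8/16/32
        self.stem = nn.Conv2d(3, chans[0], 7, stride=4, padding=3)
        self.stages = nn.ModuleList([
            nn.Conv2d(chans[i], chans[i + 1], 3, stride=2, padding=1)
            for i in range(3)
        ])
        self.mefms = nn.ModuleList([MEFM(c, ch) for c in chans])
        self.ebm = EBM(ch)                   # boundary cue from the low level
        self.loc_head = nn.Conv2d(ch, 1, 1)  # coarse localization from the top
        self.glms = nn.ModuleList([GLM(ch) for _ in range(4)])
        self.cffms = nn.ModuleList([CFFM(ch) for _ in range(3)])
        self.pred = nn.Conv2d(ch, 1, 1)

    def forward(self, x):
        feats, f = [], F.relu(self.stem(x))
        feats.append(f)
        for stage in self.stages:
            f = F.relu(stage(f))
            feats.append(f)
        feats = [m(f) for m, f in zip(self.mefms, feats)]
        boundary = self.ebm(feats[0])        # boundary guidance
        loc = self.loc_head(feats[-1])       # localization guidance
        # Dual guidance (assumed split): boundary guides the two low levels,
        # localization guides the two high levels.
        guided = [g(f, boundary if i < 2 else loc)
                  for i, (g, f) in enumerate(zip(self.glms, feats))]
        # Progressive cross-level fusion toward full resolution.
        out = guided[-1]
        for cffm, low in zip(self.cffms, reversed(guided[:-1])):
            out = cffm(low, out)
        mask = self.pred(out)
        return F.interpolate(mask, size=x.shape[-2:], mode='bilinear',
                             align_corners=False), boundary

if __name__ == "__main__":
    net = BLRNet()
    mask, boundary = net(torch.randn(1, 3, 256, 256))
    print(mask.shape, boundary.shape)  # [1, 1, 256, 256], [1, 1, 64, 64]
```

In a training setup of this shape one would typically supervise both outputs, e.g. the mask against the ground-truth segmentation and the boundary logits against edge maps derived from it; the paper's actual losses and supervision points are not specified in the abstract.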