Boundary-Guided Fusion of Multi-Level Features Network for Camouflaged Object Detection

Published: 01 Jan 2024 · Last Modified: 12 Apr 2025 · IJCNN 2024 · CC BY-SA 4.0
Abstract: Camouflaged objects exhibit high visual similarity to their surroundings, making them challenging for both humans and machines to detect. Existing methods for camouflaged object detection (COD) struggle to accurately segment the overall structure of camouflaged objects. To address this issue, we propose a novel boundary-guided fusion of multi-level features network (BGFM-Net) for COD. In contrast to existing boundary-guided methods, we pay particular attention to the significant imbalance in pixel counts between boundary and background features, allowing for a more comprehensive representation of boundary features. BGFM-Net primarily consists of a multi-scale aggregation module (MSAM), a boundary-guided feature module (BFM), and a cross-level fusion module (CLFM). MSAM effectively integrates contextual semantics at different scales, achieving a powerful and efficient feature representation. BFM adeptly combines edge features while suppressing interference from background features, guiding the learning of camouflaged object boundary representations. CLFM integrates multi-level features for predicting camouflaged objects while adaptively adjusting channel weights to emphasize important channels and diminish the impact of channels less relevant to the task. Extensive experiments on three benchmark COD datasets demonstrate that our BGFM-Net outperforms other state-of-the-art COD models.
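The abstract does not give CLFM's internals, but the "adaptively adjusting channel weights" step it describes is commonly realized as squeeze-and-excitation style gating: globally pool each channel, pass the pooled vector through a small bottleneck, and rescale the channels with the resulting sigmoid weights. The sketch below illustrates that generic mechanism only; the function name `channel_reweight` and the weight matrices `w1`/`w2` are illustrative assumptions, not the paper's actual CLFM.

```python
import numpy as np

def channel_reweight(feat, w1, w2):
    """Squeeze-and-excitation style channel gating (illustrative, not the paper's CLFM).

    feat: (C, H, W) feature map; w1: (C//r, C) and w2: (C, C//r) are bottleneck weights.
    """
    # Squeeze: global average pooling over the spatial dimensions -> (C,)
    s = feat.mean(axis=(1, 2))
    # Excitation: bottleneck MLP (ReLU) followed by a sigmoid gate per channel
    z = np.maximum(w1 @ s, 0.0)
    g = 1.0 / (1.0 + np.exp(-(w2 @ z)))  # per-channel weights in (0, 1)
    # Rescale each channel of the input feature map by its learned weight
    return feat * g[:, None, None]

# Toy usage: 4 channels, reduction ratio r = 2
rng = np.random.default_rng(0)
feat = rng.standard_normal((4, 8, 8))
w1 = rng.standard_normal((2, 4))
w2 = rng.standard_normal((4, 2))
out = channel_reweight(feat, w1, w2)
```

In a real network the gate would be learned end to end; here random weights simply show that the output keeps the input's shape while each channel is scaled by a value in (0, 1).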