Bridge the Gap of Semantic Context: A Boundary-Guided Context Fusion UNet for Medical Image Segmentation
Abstract: Accurate segmentation of lesion areas in medical images is crucial for assisting disease diagnosis and treatment planning. In this paper, we propose a boundary-guided context fusion U-Net (BCF-UNet) for medical image segmentation to address a significant limitation of existing approaches: they ignore the relationship between boundary information and semantic context. In BCF-UNet, we propose a novel feature fusion module, the Semantic Context-based Adaptive Fusion Module (SCAFM), which incorporates encoder outputs from different levels and transfers them to the decoder. The SCAFM allows our network to fuse the deepest semantic information with shallow edge information, greatly bridging the gap between features at different scales. Then, in each layer of the encoder sub-network, we propose a new pyramid edge enhancement module that uses edge-related prior knowledge to capture multi-granularity edge information, which improves the accuracy of segmentation boundaries. In BCF-UNet, edge information is specifically utilized to enhance semantic context extraction, which helps the model better understand and recognize the relationships between different tissue structures. The results of extensive experiments on two datasets show that the proposed BCF-UNet outperforms state-of-the-art techniques.
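To make the fusion idea in the abstract concrete, the following is a minimal PyTorch sketch of a skip-connection module that merges the deepest semantic feature map with a shallow, edge-rich feature map before passing it to the decoder. It is not the paper's actual SCAFM; the module name `AdaptiveContextFusion`, the channel sizes, and the squeeze-and-excitation style gating are all illustrative assumptions standing in for the "adaptive fusion" described above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveContextFusion(nn.Module):
    """Illustrative fusion of deep semantic features with shallow edge features.

    Sketch only (not the paper's SCAFM): upsample the deepest semantic map,
    concatenate it with a shallow (edge-rich) encoder output, and let learned
    channel weights decide how much of each stream reaches the decoder.
    """

    def __init__(self, shallow_ch: int, deep_ch: int, out_ch: int):
        super().__init__()
        self.project = nn.Conv2d(shallow_ch + deep_ch, out_ch, kernel_size=1)
        # Squeeze-and-excitation style gate used here as a stand-in for the
        # "adaptive" weighting mentioned in the abstract.
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(out_ch, out_ch, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, shallow: torch.Tensor, deep: torch.Tensor) -> torch.Tensor:
        # Bring the deep semantic map to the shallow map's spatial resolution.
        deep_up = F.interpolate(
            deep, size=shallow.shape[2:], mode="bilinear", align_corners=False
        )
        fused = self.project(torch.cat([shallow, deep_up], dim=1))
        return fused * self.gate(fused)  # channel-wise adaptive re-weighting


if __name__ == "__main__":
    shallow = torch.randn(1, 64, 128, 128)   # edge-rich shallow encoder output
    deep = torch.randn(1, 512, 16, 16)       # deepest semantic encoder output
    fusion = AdaptiveContextFusion(shallow_ch=64, deep_ch=512, out_ch=64)
    print(fusion(shallow, deep).shape)       # torch.Size([1, 64, 128, 128])
```

The design choice illustrated here is that boundary detail lives in the high-resolution shallow features while semantics live in the low-resolution deep features, so the fusion happens at the shallow resolution and a learned gate balances the two sources rather than simply summing them.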