Dual Encoder-Decoder Shifted Window-Based Transformer Network for Polyp Segmentation With Self-Learning Approach
Abstract: According to World Health Organization (WHO) reports, cancer is a leading cause of death worldwide, and colorectal cancer (CRC) is the second most common cause of cancer-related death in both men and women. One promising approach to reducing the severity of colon cancer is the automatic segmentation and detection of colorectal polyps in colonoscopy videos. Such technology can help endoscopists identify colorectal disease quickly, leading to earlier intervention and better patient Quality of Life (QoL). In this article, we propose a self-supervised, transformer-based dual encoder–decoder architecture named P-SwinNet for polyp segmentation in colonoscopy images. P-SwinNet adopts a dual encoder–decoder design that enhances feature maps by sharing multiscale information from the encoder with the decoder. The model uses multiple dilated convolutions to enlarge the receptive field and gather more contextual information without increasing computational cost or sacrificing spatial resolution. We also leverage a large-scale unlabeled dataset to pretrain the model with the Barlow Twins self-supervised learning strategy. In addition, to capture long-range dependencies in the data, we employ a shifted window-based attention mechanism that efficiently approximates global attention. We evaluate the model extensively against state-of-the-art algorithms: the proposed P-SwinNet achieves a mean Dice score of 0.87 and a mean Intersection over Union (IoU) of 0.82 across the five datasets used in our study. These results represent a substantial improvement over comparable existing work and highlight the advantages of the proposed approach for medical image segmentation.
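As a rough illustration of the dilated-convolution idea mentioned in the abstract, the PyTorch sketch below shows how parallel 3×3 convolutions with increasing dilation rates widen the receptive field at constant kernel cost and without downsampling, so spatial resolution is preserved. The module name, channel width, and dilation rates here are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class MultiDilatedBlock(nn.Module):
    """Illustrative block: parallel 3x3 convolutions with increasing
    dilation rates enlarge the receptive field at constant parameter
    cost, with no downsampling (spatial detail is preserved)."""

    def __init__(self, channels: int, dilations=(1, 2, 4)):
        super().__init__()
        # padding == dilation keeps the spatial size unchanged for a 3x3 kernel.
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=3,
                      padding=d, dilation=d, bias=False)
            for d in dilations
        )
        # Fuse the multiscale responses back to the input width.
        self.fuse = nn.Conv2d(channels * len(dilations), channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = torch.cat([branch(x) for branch in self.branches], dim=1)
        return self.fuse(out)

# A 3x3 kernel with dilation d spans a (2d+1) x (2d+1) area, so the d=4
# branch sees a 9x9 neighbourhood while paying only 3x3-kernel cost.
x = torch.randn(1, 64, 56, 56)
print(MultiDilatedBlock(64)(x).shape)  # torch.Size([1, 64, 56, 56])
```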
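Likewise, a minimal sketch of the Barlow Twins objective used for the self-supervised pretraining stage: it pushes the cross-correlation matrix of the embeddings of two augmented views of the same unlabeled batch towards the identity. The function name, batch and embedding sizes, and the lambda weight are illustrative assumptions, not the paper's exact configuration.

```python
import torch

def barlow_twins_loss(z1: torch.Tensor, z2: torch.Tensor,
                      lambd: float = 5e-3) -> torch.Tensor:
    """Barlow Twins objective on two (batch, dim) projector outputs:
    diagonal of the cross-correlation matrix -> 1 (invariance),
    off-diagonal -> 0 (redundancy reduction)."""
    n, _ = z1.shape
    # Normalise each embedding dimension across the batch.
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-6)
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-6)
    # Empirical cross-correlation matrix (dim x dim).
    c = (z1.T @ z2) / n
    # Invariance term: pull the diagonal towards 1.
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()
    # Redundancy-reduction term: push off-diagonal entries towards 0.
    off_diag = c.pow(2).sum() - torch.diagonal(c).pow(2).sum()
    return on_diag + lambd * off_diag

# Example: embeddings of two augmented views of the same unlabeled batch.
z_a, z_b = torch.randn(32, 128), torch.randn(32, 128)
print(barlow_twins_loss(z_a, z_b))
```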