Densely Connected Swin-UNet for Multiscale Information Aggregation in Medical Image Segmentation

Published: 01 Jan 2023 · Last Modified: 14 Nov 2024 · ICIP 2023 · CC BY-SA 4.0
Abstract: Image semantic segmentation is a dense prediction task in computer vision that has been dominated by deep learning techniques in recent years. UNet, a symmetric end-to-end encoder-decoder Convolutional Neural Network (CNN) with skip connections, has shown promising performance. Aiming to process multiscale feature information efficiently, we propose a Densely Connected Swin-UNet (DCS-UNet) with multiscale information aggregation for medical image segmentation. First, inspired by the Swin Transformer, which models long-range dependencies via shifted-window self-attention, this work builds the network entirely from ViT-based blocks with shifted windows, resulting in a purely self-attention-based U-shaped segmentation network. The relevant layers, including feature sampling and image tokenization, are redesigned to follow the ViT design. Second, a full-scale deep supervision scheme is developed to process the aggregated feature maps at the various resolutions produced by different decoder levels. Third, dense skip connections are introduced so that semantic feature information is thoroughly transferred from different encoder levels to lower-level decoders. The proposed method is validated on a public benchmark MRI cardiac segmentation dataset, where comprehensive evaluation metrics show competitive performance against other encoder-decoder variants. The code is available at https://github.com/ziyangwang007/VIT4UNet.
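The sketch below is a minimal PyTorch illustration of two of the ideas summarized above: dense skip connections that route every encoder feature at the current level or deeper into each decoder, and full-scale deep supervision with one prediction head per decoder level. It is not the authors' implementation (see the linked repository for that); plain convolutional blocks stand in for the paper's shifted-window ViT blocks, and all class, parameter, and width choices here are hypothetical.

```python
# Minimal sketch (assumption, not the authors' code): dense skip connections plus
# full-scale deep supervision in a U-shaped network. Convolutions stand in for
# the paper's shifted-window ViT blocks.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Block(nn.Module):
    """Placeholder for a Swin/ViT block: conv + norm + activation."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1),
            nn.BatchNorm2d(c_out),
            nn.GELU(),
        )
    def forward(self, x):
        return self.body(x)

class DenselyConnectedUNetSketch(nn.Module):
    def __init__(self, in_ch=1, n_classes=4, widths=(32, 64, 128, 256)):
        super().__init__()
        self.encoders = nn.ModuleList()
        prev = in_ch
        for w in widths:
            self.encoders.append(Block(prev, w))
            prev = w
        # Decoder level i receives its upsampled deeper feature plus ALL encoder
        # features from level i and deeper (dense skips), resized to its scale.
        self.decoders = nn.ModuleList()
        self.heads = nn.ModuleList()  # deep supervision: one head per decoder level
        for i in range(len(widths) - 2, -1, -1):
            skip_ch = sum(widths[i:])  # channels of all encoder features fed to this level
            self.decoders.append(Block(widths[i + 1] + skip_ch, widths[i]))
            self.heads.append(nn.Conv2d(widths[i], n_classes, 1))

    def forward(self, x):
        enc_feats = []
        for k, enc in enumerate(self.encoders):
            x = enc(x if k == 0 else F.max_pool2d(x, 2))
            enc_feats.append(x)
        outputs = []
        d = enc_feats[-1]
        for j, (dec, head) in enumerate(zip(self.decoders, self.heads)):
            level = len(enc_feats) - 2 - j
            target = enc_feats[level].shape[-2:]
            d = F.interpolate(d, size=target, mode="bilinear", align_corners=False)
            # Dense skips: resize every encoder feature at this level or deeper, then concat.
            skips = [F.interpolate(enc_feats[l], size=target, mode="bilinear",
                                   align_corners=False)
                     for l in range(level, len(enc_feats))]
            d = dec(torch.cat([d] + skips, dim=1))
            outputs.append(head(d))  # side output supervised during training
        return outputs  # coarse-to-fine predictions; the last is at input resolution

if __name__ == "__main__":
    net = DenselyConnectedUNetSketch()
    preds = net(torch.randn(2, 1, 128, 128))
    print([p.shape for p in preds])  # three side outputs at increasing resolution
```

In a full-scale deep supervision setup, each side output would contribute a segmentation loss term against the (appropriately resized) ground-truth mask; the exact loss weighting and the Swin-style attention blocks are left out of this sketch.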