Abstract: The Transformer model has received extensive attention in recent years. Its powerful ability to model contextual relationships makes it outstanding at accurately segmenting medical structures such as organs and lesions. However, as Transformer models grow more complex, their computational overhead also increases significantly, becoming a key factor limiting further performance gains. In addition, some existing methods use channel dimensionality reduction to model cross-channel relationships; although this strategy effectively reduces computation, it may cause information loss or degrade segmentation performance on medical images rich in detail. To address these problems, we propose PCMA Former, an innovative medical image segmentation model that combines convolution with focused weight reparameterization and a channel multi-branch attention mechanism, aiming to improve performance effectively while maintaining low computational overhead. Experiments on multiple medical image datasets (Synapse, ISIC2017, and ISIC2018) show that PCMA Former achieves better results than traditional convolutional neural networks and existing Transformer models.
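To make the motivation concrete, the sketch below illustrates the general idea of a multi-branch channel attention that avoids channel dimensionality reduction: each pooling branch produces a full-length per-channel descriptor, so no channels are compressed away before the gate is computed. This is a minimal NumPy illustration under our own assumptions (two pooling branches and a per-channel weight vector `w` standing in for a learned transform), not the actual PCMA Former implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def multi_branch_channel_attention(x, w):
    """Illustrative reduction-free channel attention (hypothetical sketch).

    x: feature map of shape (C, H, W)
    w: per-channel weights of shape (C,), a stand-in for a learned 1x1
       transform that keeps the full channel dimension (no reduction).
    """
    avg_desc = x.mean(axis=(1, 2))             # branch 1: global average pooling -> (C,)
    max_desc = x.max(axis=(1, 2))              # branch 2: global max pooling -> (C,)
    gate = sigmoid(w * (avg_desc + max_desc))  # fuse branches into a per-channel gate in (0, 1)
    return x * gate[:, None, None]             # rescale each channel of the feature map

# Toy usage: a 2-channel 3x4 feature map keeps its shape after gating.
x = np.arange(24, dtype=float).reshape(2, 3, 4)
w = np.ones(2)
y = multi_branch_channel_attention(x, w)
```

Because every branch operates on the full channel vector, the gate can weight all channels independently, which is the property the abstract argues is lost under channel dimensionality reduction.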