Abstract: Transformers have achieved remarkable success across multiple fields, yet their impact on 3D medical image segmentation remains limited, with convolutional networks still dominating major benchmarks. In this work, we analyze current Transformer-based segmentation models and identify critical shortcomings, particularly their over-reliance on convolutional blocks. Further, we show that in some architectures performance is unaffected by the absence of the Transformer, revealing its limited effectiveness. To address these challenges, we move away from hybrid architectures and introduce Transformer-centric segmentation architectures, termed Primus and PrimusV2. Primus combines high-resolution tokens with advances in positional embeddings and block design to fully exploit its Transformer blocks, while PrimusV2 expands on this through an iterative patch embedding. Through these adaptations, Primus surpasses current Transformer-based methods and competes with a default nnU-Net, while PrimusV2 exceeds it and is on par with state-of-the-art CNNs such as the ResEnc-L and MedNeXt architectures across nine public datasets. In doing so, we introduce the first competitive Transformer-centric model, making Transformers state-of-the-art in 3D medical segmentation. Code is made available.
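The high-resolution-token idea mentioned in the abstract can be illustrated with a minimal sketch of 3D patch tokenization: a volume is split into small non-overlapping patches, each flattened into one token. The function name, patch size, and shapes below are illustrative assumptions, not the paper's exact Primus configuration.

```python
import numpy as np

def patchify_3d(volume, patch=2):
    """Split a 3D volume into non-overlapping patch tokens.

    A small patch size (here an assumed 2x2x2 voxels) yields many
    high-resolution tokens, preserving fine detail for dense
    segmentation, at the cost of a longer token sequence.
    """
    d, h, w = volume.shape
    assert d % patch == 0 and h % patch == 0 and w % patch == 0
    v = volume.reshape(d // patch, patch, h // patch, patch, w // patch, patch)
    # Reorder to (grid_d, grid_h, grid_w, patch, patch, patch)
    v = v.transpose(0, 2, 4, 1, 3, 5)
    # Flatten each patch into one token vector
    return v.reshape(-1, patch ** 3)  # (num_tokens, voxels_per_token)

vol = np.arange(4 * 4 * 4, dtype=np.float32).reshape(4, 4, 4)
tokens = patchify_3d(vol, patch=2)
print(tokens.shape)  # (8, 8): a 2x2x2 grid of tokens, 8 voxels each
```

In a Transformer-centric segmentation model, such tokens would be linearly projected, augmented with positional embeddings, and processed by the Transformer blocks before being mapped back to a dense voxel-wise prediction.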
Submission Type: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: - Changed highlighting of revisions from blue to black text
- Added Hyperlink to code repository
- Fixed minor typos/grammar issues
- Added Authors with Affiliations and Acknowledgments
Code: https://github.com/MIC-DKFZ/nnUNet/blob/master/documentation/primus.md
Assigned Action Editor: ~Lei_Wang13
Submission Number: 7199