Keywords: Medical image segmentation, nnU-Net, weight pruning, sparsity
Abstract: nnU-Net is widely known for its accurate and robust segmentation performance in medical imaging tasks. However, the trained networks are typically heavily parameterized, and their high computational demands limit deployment on devices with constrained resources. In this paper, we show that more than 80% of the trained nnU-Net weights can be removed without significant performance degradation, maintaining a proxy Dice score of >0.95. This holds for both 2D and 3D configurations across four different medical image segmentation datasets. Interestingly, we observe that critical weights consistently concentrate near the U-Net encoder and decoder ends, while the bottleneck layers can be heavily pruned. These findings highlight the substantial weight redundancy in nnU-Net and suggest opportunities for further optimization to facilitate deployment on resource-constrained devices.
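The abstract does not specify the pruning criterion; a minimal sketch of one common approach, global magnitude pruning (zeroing the smallest-magnitude fraction of weights across all layers), is shown below. The function `magnitude_prune` and its NumPy-based setup are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.8):
    """Zero out the smallest-magnitude fraction of weights globally.

    `weights` is a list of per-layer arrays; `sparsity` is the fraction
    of entries to remove. Hypothetical helper for illustration only.
    """
    flat = np.concatenate([w.ravel() for w in weights])
    k = int(sparsity * flat.size)
    if k == 0:
        return [w.copy() for w in weights]
    # k-th smallest absolute value serves as the global cut-off
    threshold = np.partition(np.abs(flat), k - 1)[k - 1]
    # keep weights strictly above the threshold, zero the rest
    return [np.where(np.abs(w) > threshold, w, 0.0) for w in weights]
```

In practice one would prune the trained nnU-Net checkpoint, then re-evaluate segmentation quality (e.g. a Dice score) to verify that the target sparsity does not degrade performance.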
Submission Number: 78