Make nnUNets Small Again

Published: 28 Apr 2023, Last Modified: 31 May 2023 · MIDL 2023 Short Paper Track · Poster
Keywords: 3D semantic segmentation, model distillation, efficient convolutions
TL;DR: We show that applying 3×3×3 convolutions to only a subset of the channels, combined with a specifically designed inverted bottleneck and re-parameterisation, enables a better balance between model size, training effort and computational burden in deep segmentation networks.
Abstract: Automatic high-quality segmentations have become ubiquitous in numerous downstream tasks of medical image analysis, e.g. shape-based pathology classification or semantically guided image registration. Public frameworks for 3D U-Nets provide numerous pre-trained models for nearly all anatomies in CT scans. Yet, this strong generalisation comes at the cost of very heavy networks, with millions of parameters and trillions of floating point operations for every single model in even larger ensembles. We present a novel combination of two orthogonal approaches to lower the computational (and environmental) burden of U-Nets: partial convolution and structural re-parameterisation, which tackle these intertwined challenges while keeping real-world latency small.
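The partial-convolution idea from the TL;DR can be sketched as follows: a full 3×3×3 convolution is applied to only the first fraction of the channels, while the remaining channels pass through untouched (a later pointwise convolution in the inverted bottleneck would mix them). This is a hypothetical minimal PyTorch sketch, not the authors' actual implementation; the class name `PartialConv3d` and the `ratio` parameter are assumptions for illustration.

```python
import torch
import torch.nn as nn

class PartialConv3d(nn.Module):
    """Hypothetical sketch of a partial 3D convolution: the 3x3x3 conv
    acts on the first `channels // ratio` channels only; the remaining
    channels are passed through unchanged."""

    def __init__(self, channels: int, ratio: int = 4):
        super().__init__()
        self.conv_ch = channels // ratio  # channels that receive the 3x3x3 conv
        self.conv = nn.Conv3d(self.conv_ch, self.conv_ch,
                              kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        head, tail = x[:, :self.conv_ch], x[:, self.conv_ch:]
        # only `head` pays the cost of the spatial convolution
        return torch.cat([self.conv(head), tail], dim=1)

x = torch.randn(1, 32, 8, 8, 8)       # (batch, channels, D, H, W)
y = PartialConv3d(32, ratio=4)(x)     # 3x3x3 conv touches 8 of 32 channels
print(y.shape)                        # torch.Size([1, 32, 8, 8, 8])
```

With `ratio=4`, the spatial convolution processes only a quarter of the channels, cutting its FLOPs and parameters by roughly a factor of 16 compared with a full 3×3×3 convolution over all channels.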