Continuous U-Net: Faster, Greater and Noiseless

Published: 29 Apr 2024, Last Modified: 29 Apr 2024. Accepted by TMLR.
Abstract: Image segmentation is a fundamental task in image analysis and clinical practice. The current state-of-the-art techniques are based on U-shaped encoder-decoder networks with skip connections, known as U-Nets. Despite the strong performance reported for existing U-Net type networks, they suffer from several major limitations: the receptive field size is hard-coded, which compromises both performance and computational cost; they do not account for inherent noise in the data; they have problems associated with discrete layers; and they lack theoretical underpinning. In this work we introduce continuous U-Net, a novel family of networks for image segmentation. Firstly, continuous U-Net is a continuous deep neural network that introduces new dynamic blocks modelled by second-order ordinary differential equations. Secondly, we provide theoretical guarantees for our network, demonstrating faster convergence, higher robustness, and lower sensitivity to noise. Thirdly, we derive qualitative measures tailored to specific segmentation tasks. We demonstrate, through extensive numerical and visual results, that our model outperforms existing U-Net blocks on several medical image segmentation benchmark datasets.
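To make the core idea of a dynamic block concrete, here is a minimal PyTorch-style sketch of a feature block governed by a second-order ODE x'' = f(x, x'), rewritten as a first-order system in position x and velocity v and integrated over pseudo-time t in [0, 1]. The class name, the zero initial velocity, the convolutional form of f, and the fixed-step Euler integrator are all illustrative assumptions, not the paper's actual implementation (which may use an adaptive solver).

import torch
import torch.nn as nn

class SecondOrderDynamicBlock(nn.Module):
    def __init__(self, channels: int, steps: int = 8):
        super().__init__()
        # Acceleration field f(x, v); consumes the concatenated state (x, v).
        self.f = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )
        self.steps = steps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        v = torch.zeros_like(x)      # initial velocity x'(0) = 0 (an assumption)
        h = 1.0 / self.steps         # step size over pseudo-time t in [0, 1]
        for _ in range(self.steps):
            a = self.f(torch.cat([x, v], dim=1))  # x''(t) = f(x, x')
            x = x + h * v            # explicit Euler update of position
            v = v + h * a            # explicit Euler update of velocity
        return x                     # x(1) replaces the output of a discrete block

# Example: process a 64-channel feature map of size 32x32.
block = SecondOrderDynamicBlock(channels=64)
out = block(torch.randn(1, 64, 32, 32))  # shape: (1, 64, 32, 32)

Because the layer's "depth" is the integration horizon rather than a stack of discrete layers, the effective receptive field is no longer hard-coded, which is the limitation the abstract highlights.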
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: We thank the AC and reviewers for the insightful feedback. We address the question from the minor revision as follows. The variations observed in the Dice scores across different runs can be attributed to the inherent stochasticity of model initialisation: in our experiments, the weights were randomly initialised before training. This practice, while standard, can lead to slight variations in performance metrics such as the Dice score, even under identical training conditions (as seen in our results after the first rebuttal period). This is a common aspect of training deep learning models and reflects the stochastic nature of the training process. We are grateful for the opportunity to clarify this aspect of our work and thank you for the constructive feedback.
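For readers who wish to reduce this run-to-run variance, a minimal sketch of fixing the random seeds in a typical PyTorch training script follows; the helper name, the seed value, and the set of libraries seeded are assumptions about a generic setup, not the authors' code.

import random
import numpy as np
import torch

def set_seed(seed: int = 0) -> None:
    # Seed the Python, NumPy, and PyTorch RNGs so that weight initialisation
    # (and hence metrics such as the Dice score) is repeatable across runs.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)

set_seed(0)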
Assigned Action Editor: ~Hongsheng_Li3
Submission Number: 1871