How few annotations are needed for segmentation using a multi-planar U-Net?

10 Feb 2021 (modified: 16 May 2023) · Submitted to MIDL 2021
Keywords: 3D imaging, segmentation, deep learning, U-Net, sparse annotations
Abstract: U-Net architectures are an extremely powerful tool for segmenting 3D volumes, and the recently proposed multi-planar U-Net has reduced the computational requirement for applying the U-Net architecture to three-dimensional isotropic data to a subset of two-dimensional planes. Despite this considerable reduction in model parameters and required training data, providing the necessary manually annotated data can still be a daunting task. In this article, we investigate the multi-planar U-Net's ability to learn three-dimensional structures in isotropic data from sparsely annotated training samples. Technically, we pick random training planes intersecting the three-dimensional image and sparsely annotate the pixels along random lines in each of these planes. We present our empirical findings on a public-domain electron microscopy data set, which has been fully annotated by an expert, and surprisingly we find that the multi-planar U-Net with our random annotation strategy on average requires fewer than 30% of the annotations. Sometimes less is more!
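The sparse annotation strategy described in the abstract can be sketched in code: within one 2D training plane, mark only the pixels that lie along a few randomly drawn lines, leaving the rest unlabeled. This is a minimal illustrative sketch, not the authors' implementation; the function name, parameters, and line-rasterization details are assumptions.

```python
import numpy as np

def sparse_line_mask(height, width, n_lines, rng):
    """Return a boolean mask marking pixels along `n_lines` random
    lines through a (height, width) training plane. True = annotated,
    False = no label provided (ignored during training)."""
    mask = np.zeros((height, width), dtype=bool)
    for _ in range(n_lines):
        # Two random endpoints define one annotation line.
        (r0, c0), (r1, c1) = rng.integers(0, [height, width], size=(2, 2))
        # Rasterize the segment by sampling one point per pixel step.
        n = max(abs(r1 - r0), abs(c1 - c0)) + 1
        rows = np.linspace(r0, r1, n).round().astype(int)
        cols = np.linspace(c0, c1, n).round().astype(int)
        mask[rows, cols] = True
    return mask

rng = np.random.default_rng(0)
mask = sparse_line_mask(128, 128, n_lines=8, rng=rng)
print(f"annotated fraction: {mask.mean():.3f}")
```

With only a handful of lines per plane, the annotated fraction stays in the low single-digit percent range, which is the regime the paper's "fewer than 30% of the annotations" result concerns.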
Registration: I acknowledge that publication of this at MIDL and in the proceedings requires at least one of the authors to register and present the work during the conference.
Authorship: I confirm that I am the author of this work and that it has not been submitted to another publication before.
Paper Type: methodological development
Primary Subject Area: Segmentation
Secondary Subject Area: Learning with Noisy Labels and Limited Data