Keywords: deep 3D convolutional nets, whole brain segmentation, deep ensemble
Abstract: Segmentation of 3D volumes with a large number of labels, small convoluted structures, and a lack of contrast between various structural boundaries is a difficult task. While recent methodological advances across many segmentation tasks are dominated by 3D architectures, the currently strongest-performing method for whole brain segmentation is FastSurferCNN, a 2.5D approach. To shed light on the nuanced differences between 2.5D and various 3D approaches, we perform a thorough and fair comparison and suggest a spatially-ensembled 3D architecture. Interestingly, we observe that training memory-intensive 3D segmentation networks on full-view images does not outperform the 2.5D approach. A shift to training on patches, even while evaluating on full-view images, solves both the memory and the performance limitations at the same time. We demonstrate significant performance improvements over state-of-the-art 3D methods on the Dice Similarity Coefficient and especially on average Hausdorff Distance measures across five datasets. Finally, our validation across neurodegenerative disease states and scanner manufacturers shows that we outperform the previously leading 2.5D approach FastSurferCNN, demonstrating robust segmentation performance in realistic settings.
Registration: I acknowledge that publication of this work at MIDL and in the proceedings requires at least one of the authors to register and present the work during the conference.
Authorship: I confirm that I am the author of this work and that it has not been submitted to another publication before.
Paper Type: both
Primary Subject Area: Segmentation
Secondary Subject Area: Application: Radiology
Confidentiality And Author Instructions: I read the call for papers and author instructions. I acknowledge that exceeding the page limit and/or altering the latex template can result in desk rejection.
Code And Data: Code: https://github.com/Deep-MI/3d-neuro-seg Model files: http://doi.org/10.34730/67dfccf54c75492388f038128aa4c687 Data: We exclusively use publicly available datasets for our training. While we are not allowed to redistribute the datasets, the training / validation / test datasets may be reproduced by following the splits provided on GitHub, and the reference segmentations may be generated with the FreeSurfer pipeline, as extensively explained in the paper (Section 2.4 and Table 1 here), in the FastSurfer paper (https://doi.org/10.1016/j.neuroimage.2020.117012, Section 2.1 and Appendix there), and in the code at https://github.com/Deep-MI/FastSurfer