Highlights
• A segmentation deep neural network can be trained to satisfy anatomical constraints.
• No modification of the architecture is required: no computational cost at inference.
• Training can be done in a semi-supervised way to benefit from non-annotated data.
• Training with constraints yields a large reduction of errors as measured by the Hausdorff distance.

Abstract
The advent of deep learning has pushed medical image analysis to new levels, rapidly replacing more traditional machine learning and computer vision pipelines. However, segmenting and labelling anatomical regions remains challenging owing to appearance variations, imaging artifacts, the paucity and variability of annotated data, and the difficulty of fully exploiting domain constraints such as anatomical knowledge about inter-region relationships. We address the last point by improving the network's region-labelling consistency with NonAdjLoss, an adjacency-graph-based auxiliary training loss that penalizes outputs containing regions with anatomically incorrect adjacency relationships. NonAdjLoss supports both fully supervised training and a semi-supervised extension in which it is applied to unlabelled supplementary training data. The approach substantially reduces segmentation anomalies on the MICCAI 2012 and IBSRv2 brain MRI datasets and on the Anatomy3 whole-body CT dataset, especially when semi-supervised training is included.
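For intuition only, below is a minimal PyTorch-style sketch of one way an adjacency-based penalty of this kind could be computed from a network's softmax output. The function names, the dilation-based soft adjacency estimate, and the `allowed` matrix are illustrative assumptions, not the paper's exact formulation of NonAdjLoss.

```python
import torch
import torch.nn.functional as F

def soft_adjacency(probs, kernel_size=3):
    """Estimate a soft label-adjacency matrix from softmax probabilities.

    probs: (B, C, H, W) per-class probabilities.
    Two labels are treated as adjacent when their probability maps
    overlap within a small neighbourhood (obtained by dilation).
    """
    # Dilate each class map so that touching regions overlap spatially.
    dilated = F.max_pool2d(probs, kernel_size, stride=1, padding=kernel_size // 2)
    b, c, _, _ = probs.shape
    flat = probs.reshape(b, c, -1)        # (B, C, N)
    flat_dil = dilated.reshape(b, c, -1)  # (B, C, N)
    # adjacency[i, j] ~ how much class i touches class j across the batch.
    adjacency = torch.einsum('bin,bjn->ij', flat, flat_dil)
    return adjacency / adjacency.max().clamp(min=1e-8)

def non_adjacency_penalty(probs, allowed):
    """Penalize adjacency mass on label pairs forbidden by the anatomical prior.

    allowed: (C, C) binary matrix, 1 where the label pair may be adjacent
    (e.g. estimated from the adjacency graph of the training annotations).
    """
    adjacency = soft_adjacency(probs)
    forbidden = 1.0 - allowed
    return (adjacency * forbidden).sum()
```

In use, such a term would be added to the usual segmentation loss, e.g. `loss = dice_or_ce_loss + lam * non_adjacency_penalty(probs, allowed)`, and could also be evaluated on unlabelled images since it requires no ground-truth masks, which is what makes a semi-supervised extension possible.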