Deep Multi-Class Segmentation Without Ground-Truth Labels

Thomas Joyce, Agisilaos Chartsias, Sotirios A. Tsaftaris

Apr 11, 2018 (modified: Apr 13, 2018) MIDL 2018 Conference Submission
  • Abstract: In this paper we demonstrate that through the use of adversarial training and additional unsupervised costs it is possible to train a multi-class anatomical segmentation algorithm without any ground-truth labels for the data set to be segmented. Specifically, using labels from a different data set of the same anatomy (although potentially in a different modality) we train a model to synthesise realistic multi-channel label masks from input cardiac images in both CT and MRI, through adversarial learning. However, as is to be expected, generating realistic mask images is not, on its own, sufficient for the segmentation task: the model can use the input image as a source of noise and synthesise highly realistic segmentation masks that do not necessarily correspond spatially to the input. To overcome this, we introduce additional unsupervised costs, and demonstrate that these provide sufficient further guidance to produce good segmentation results. We test our proposed method on both CT and MR data from the multi-modal whole heart segmentation challenge (MM-WHS) [1], and show the effect of our unsupervised costs on improving the segmentation results, in comparison to a variant without them. [1] MICCAI 2017 MM-WHS Challenge. http://www.sdspeople.fudan.edu.cn/zhuangxiahai/0/mmwhs17/. Accessed: 2018-04-11.
  • Author affiliation: University of Edinburgh
  • Keywords: unsupervised, segmentation, cardiac, adversarial
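The abstract's training setup can be sketched numerically: a discriminator learns to distinguish real label masks (from the labelled data set) from synthesised ones, while the segmenter is trained to fool it plus an unsupervised cost that ties the mask to the input image. The code below is a minimal NumPy sketch of these objectives only; the alignment term shown (per-class intensity variance) is a stand-in assumption, not the paper's actual unsupervised cost, and all function names are hypothetical.

```python
import numpy as np

def bce(p, y):
    """Binary cross-entropy on probabilities in (0, 1)."""
    eps = 1e-7
    p = np.clip(p, eps, 1.0 - eps)
    return float(-np.mean(y * np.log(p) + (1.0 - y) * np.log(1.0 - p)))

def discriminator_loss(d_real, d_fake):
    """Discriminator: score real label masks as 1, synthesised masks as 0."""
    return bce(d_real, np.ones_like(d_real)) + bce(d_fake, np.zeros_like(d_fake))

def segmenter_loss(d_fake, image, soft_mask, lam=1.0):
    """Segmenter: adversarial term (fool the discriminator) plus an
    unsupervised image-alignment term. Here we use intensity variance
    within each predicted class region -- a placeholder assumption,
    not the cost used in the paper."""
    adv = bce(d_fake, np.ones_like(d_fake))
    # soft_mask: (K, H, W) per-class probabilities, summing to 1 over K
    w = soft_mask / (soft_mask.sum(axis=(1, 2), keepdims=True) + 1e-7)
    means = (w * image[None]).sum(axis=(1, 2))  # per-class mean intensity
    var = float((soft_mask * (image[None] - means[:, None, None]) ** 2).mean())
    return adv + lam * var

# Toy usage on random data (stand-ins for network outputs)
rng = np.random.default_rng(0)
image = rng.random((8, 8))
soft_mask = rng.random((3, 8, 8))
soft_mask /= soft_mask.sum(axis=0, keepdims=True)
d_fake = np.full((4,), 0.5)  # discriminator scores for synthesised masks
loss = segmenter_loss(d_fake, image, soft_mask)
```

The key point the abstract makes is visible in this decomposition: the adversarial term alone constrains only the realism of `soft_mask`, so the alignment term is what forces spatial correspondence with `image`.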