- Abstract: Multi-domain data are widely leveraged in vision applications to exploit complementary information from each modality, e.g., brain tumor segmentation from multi-parametric magnetic resonance imaging (MRI). However, due to differing imaging protocols and to data loss or corruption, the availability of images across all domains can vary among data sources in practice, which makes it challenging to train and test a universal model on such varied inputs. To tackle this problem, we propose a general approach that completes the missing domains of the input data across a variety of application settings. Specifically, we develop a novel generative adversarial network (GAN) architecture that uses a representational disentanglement scheme: a shared "skeleton" (content) encoding and separate "flesh" (style) encodings across the multiple domains. We further show that the representation learned for multi-domain image translation can be leveraged for higher-level recognition tasks such as segmentation, and we introduce a unified framework in which an image completion branch and a segmentation branch share a single content encoder. Integrating the proposed disentanglement scheme yields consistent and significant performance improvements in both multi-domain image completion and image segmentation on three evaluation datasets: brain tumor segmentation, prostate segmentation, and facial expression image completion.
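The abstract's core design can be made concrete with a minimal structural sketch. This is a hypothetical illustration, not the paper's implementation: plain-Python stand-ins replace the neural encoders, and all names (`Encoder`, `encode`, `complete_missing`, `segment`, the MRI domain labels) are assumptions. The sketch shows the key wiring: one content ("skeleton") encoder is shared across all domains, each domain gets its own style ("flesh") encoder, and both the completion branch and the segmentation branch consume the same content code.

```python
# Hypothetical sketch of shared-content / per-domain-style disentanglement.
# Encoders are labeled identity transforms standing in for neural networks.

class Encoder:
    """Stand-in for a neural encoder: tags its input with the encoder name."""
    def __init__(self, name):
        self.name = name

    def __call__(self, image):
        return (self.name, image)

DOMAINS = ["T1", "T2", "FLAIR"]  # e.g. multi-parametric MRI sequences

content_encoder = Encoder("content")  # one shared "skeleton" encoder
style_encoders = {d: Encoder(f"style_{d}") for d in DOMAINS}  # separate "flesh" encoders

def encode(domain, image):
    """Disentangle an input into a shared content code and a domain-specific style code."""
    return content_encoder(image), style_encoders[domain](image)

def complete_missing(target_domain, content_code, style_code):
    """Completion branch: a generator would synthesize the missing domain
    from the shared content code plus the target domain's style code."""
    return ("generated", target_domain, content_code, style_code)

def segment(content_code):
    """Segmentation branch: reuses the very same content code."""
    return ("mask", content_code)

# Given an available T1 image, complete a missing T2 image and segment:
content, style_t1 = encode("T1", "t1_image")
fake_t2 = complete_missing("T2", content, style_encoders["T2"]("t1_image"))
mask = segment(content)
```

The point of the shared content encoder is that the segmentation branch benefits from whatever domain-invariant anatomy the completion task forces the content code to capture, which is the intuition behind the unified framework described above.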