Abstract: Most current Generative Adversarial Networks (GANs) for medical image translation generate only one definitive output for each input. In reality, however, many translations can be simultaneously valid: when synthesising medical images from a segmentation map, for example, diverse tissue structures, contrasts, textures, and even modalities are possible for the same segmentation. In this work, we propose the Manifold Disentanglement Generative Adversarial Network (MDGAN), a style-based network capable of capturing this output diversity. The mechanism enabling output diversity is a style-based manifold, which is learnt from image data and can be sampled to “stylise” the input into diverse outputs. We train MDGAN for segmentation-to-MR-and-CT translation and show that the manifold i) learns distinct clusters that control the output modality (CT or MR), ii) can be traversed to smoothly alter features within each modality (such as tissue structures and contrasts in MR), and iii) is disentangled such that the input's anatomical structures are faithfully preserved when generating diverse images from the same segmentation map.
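To make the style-sampling mechanism concrete, the following is a minimal, hypothetical PyTorch sketch of how a style code sampled from a learned manifold could modulate a shared content representation via adaptive instance normalisation (AdaIN), a common conditioning mechanism in style-based generators. The class names (`StyleMapper`, `AdaIN`) and all dimensions are illustrative assumptions, not MDGAN's actual implementation.

```python
import torch
import torch.nn as nn

class AdaIN(nn.Module):
    """Adaptive instance normalisation: re-scales normalised content
    features with per-channel statistics predicted from a style code."""
    def __init__(self, style_dim, num_channels):
        super().__init__()
        self.norm = nn.InstanceNorm2d(num_channels, affine=False)
        self.affine = nn.Linear(style_dim, num_channels * 2)

    def forward(self, content, style):
        gamma, beta = self.affine(style).chunk(2, dim=1)
        gamma = gamma.unsqueeze(-1).unsqueeze(-1)
        beta = beta.unsqueeze(-1).unsqueeze(-1)
        return (1 + gamma) * self.norm(content) + beta

class StyleMapper(nn.Module):
    """Maps latent samples z onto a learned style manifold
    (hypothetical stand-in for MDGAN's style pathway)."""
    def __init__(self, latent_dim=64, style_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, style_dim),
        )

    def forward(self, z):
        return self.net(z)

# Sampling two different style codes for the same segmentation-derived
# content yields two different stylised outputs, while the content
# path (and hence the anatomy) is left unchanged.
mapper = StyleMapper()
adain = AdaIN(style_dim=64, num_channels=32)
content = torch.randn(1, 32, 128, 128)  # encoder features of one segmentation map
for _ in range(2):
    w = mapper(torch.randn(1, 64))      # sample a point on the style manifold
    out = adain(content, w)             # "stylise" the shared content
```

Under this reading, modality clusters and smooth within-modality traversal would correspond to regions and paths in the space of `w`, while disentanglement means `w` only enters through the style statistics and never alters the content features directly.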