Unsupervised Multi-Modal Medical Image Registration via Discriminator-Free Image-to-Image Translation
Abstract: In clinical practice, well-aligned multi-modal images, such as Magnetic Resonance (MR) and Computed Tomography (CT), together provide complementary information for image-guided therapies. Multi-modal image registration is essential for the accurate alignment of these images. However, it remains a very challenging task due to the complicated and unknown spatial correspondence between different modalities. In this paper, we propose a novel translation-based unsupervised deformable image registration approach that converts the multi-modal registration problem into a mono-modal one. Specifically, our approach incorporates a discriminator-free translation network to facilitate the training of the registration network, and a patchwise contrastive loss to encourage the translation network to preserve object shapes. Furthermore, we propose to replace the adversarial loss, which is widely used in previous multi-modal image registration methods, with a pixel loss in order to integrate the output of translation into the target modality. This yields an unsupervised method that requires no ground-truth deformations or pairs of aligned images for training. We evaluate four variants of our approach on the public Learn2Reg 2021 datasets. The experimental results demonstrate that the proposed architecture achieves state-of-the-art performance.
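The two loss terms mentioned in the abstract can be illustrated with a minimal sketch. The snippet below is a hypothetical simplification, not the paper's implementation: `pixel_loss` stands in for the pixel loss that replaces the adversarial term, and `patchwise_contrastive_loss` is a generic InfoNCE-style loss over feature patches (the positive is the corresponding source patch, the negatives are other patches); all function names and the temperature `tau` are assumptions for illustration.

```python
import numpy as np

def pixel_loss(translated, target):
    # L1 pixel loss replacing the adversarial loss: penalizes the
    # per-pixel difference between the translated image and an image
    # in the target modality (illustrative simplification).
    return np.mean(np.abs(translated - target))

def patchwise_contrastive_loss(query, positive, negatives, tau=0.07):
    # InfoNCE-style patchwise loss: pull the translated patch embedding
    # (query) toward the corresponding source patch (positive) and push
    # it away from other patches (negatives). Inputs are 1-D feature
    # vectors; negatives is a 2-D array, one row per negative patch.
    q = query / np.linalg.norm(query)
    pos = positive / np.linalg.norm(positive)
    negs = negatives / np.linalg.norm(negatives, axis=1, keepdims=True)
    logits = np.concatenate([[q @ pos], negs @ q]) / tau
    # Cross-entropy with the positive patch as the correct class
    # (index 0), written via the log-sum-exp of all logits.
    return -logits[0] + np.log(np.sum(np.exp(logits)))
```

In this formulation, shape preservation comes from the contrastive term operating on local patches, while the pixel term anchors the translated image to the target modality's appearance.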