Multimodal Image Registration Guided by Few Segmentations from One Modality

Published: 06 Jun 2024 (Last Modified: 06 Jun 2024)
Venue: MIDL 2024 Poster
License: CC BY 4.0
Keywords: multimodal image registration, few-shot learning
Abstract: Registration of multimodal images is challenging, especially when dealing with different anatomical structures and samples without segmentations. The main difficulty arises from registration loss functions that are inadequate in the absence of corresponding regions. In this work, we present the first registration and segmentation approach tailored to this challenge. In particular, we consider the practically highly relevant scenario in which only a limited number of segmentations are available for one modality and none for the other. First, we augment our few segmented samples via unsupervised deep registration within that modality, providing many anatomically plausible samples on which to train a segmentation network. Using an unsupervised domain adaptation architecture, this network then allows us to train a second segmentation network on the target modality, for which no segmentations are available. Finally, we train a deep registration network to register multimodal image pairs purely based on the predictions of these segmentation networks. Our work demonstrates that a small number of segmentations from one modality suffices to train a segmentation network on a target modality without any additional manual segmentations on that modality. Additionally, we show that registration based on these segmentations yields smooth and accurate deformation fields on anatomically different image pairs, unlike previous methods. We evaluate our approach on 2D medical image segmentation and registration between knee DXA and X-ray images, where our experiments show that it outperforms existing methods. Code is available at https://github.com/uncbiag/SegGuidedMMReg.
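To make the final step of the abstract's pipeline concrete, below is a minimal sketch (not the authors' implementation; see the linked repository for that) of a segmentation-driven registration loss in PyTorch: the moving image's predicted soft segmentation is warped by a predicted displacement field and compared to the fixed image's segmentation with a soft Dice term, plus a smoothness regularizer. All function names (`warp`, `soft_dice_loss`, `smoothness_loss`, `registration_loss`), the 2D displacement-field parameterization in normalized coordinates, and the weighting `lam` are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def warp(seg, disp):
    """Warp a soft segmentation (B, C, H, W) with a displacement field
    (B, 2, H, W) given in normalized [-1, 1] coordinates (an assumed
    parameterization), using bilinear resampling."""
    B, _, H, W = seg.shape
    # Identity sampling grid in normalized coordinates.
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, H, device=seg.device),
        torch.linspace(-1, 1, W, device=seg.device),
        indexing="ij",
    )
    grid = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(B, -1, -1, -1)
    # Add the predicted displacement (channels-last for grid_sample).
    grid = grid + disp.permute(0, 2, 3, 1)
    return F.grid_sample(seg, grid, align_corners=True)

def soft_dice_loss(pred, target, eps=1e-6):
    """1 minus mean soft Dice over batch and label channels."""
    inter = (pred * target).sum(dim=(2, 3))
    denom = pred.sum(dim=(2, 3)) + target.sum(dim=(2, 3))
    return 1.0 - ((2 * inter + eps) / (denom + eps)).mean()

def smoothness_loss(disp):
    """Penalize squared spatial gradients of the displacement field,
    encouraging smooth deformations."""
    dx = (disp[:, :, :, 1:] - disp[:, :, :, :-1]).pow(2).mean()
    dy = (disp[:, :, 1:, :] - disp[:, :, :-1, :]).pow(2).mean()
    return dx + dy

def registration_loss(seg_moving, seg_fixed, disp, lam=1.0):
    """Train registration purely from segmentation predictions:
    Dice between the warped moving segmentation and the fixed
    segmentation, plus a smoothness regularizer (lam is assumed)."""
    warped = warp(seg_moving, disp)
    return soft_dice_loss(warped, seg_fixed) + lam * smoothness_loss(disp)
```

In this sketch, `seg_moving` and `seg_fixed` would come from the two modality-specific segmentation networks, so no intensity-based multimodal similarity term is needed; this mirrors the abstract's claim that registration is driven purely by segmentation predictions, though the actual loss terms and weights used in the paper may differ.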
Submission Number: 169