Unsupervised learning of multimodal image registration using domain adaptation with projected Earth Mover's discrepancies

Jan 25, 2020 (edited Jun 24, 2020) · MIDL 2020 Conference Blind Submission
  • Abstract: Multimodal image registration is a very challenging problem for deep learning approaches. Most current work focuses either on supervised learning, which requires labelled training scans and may yield models biased towards the annotated structures, or on unsupervised approaches, which rely on hand-crafted similarity metrics and may therefore fail to outperform their classical non-trained counterparts. We believe that unsupervised domain adaptation can help overcome these limitations for multimodal registration, where good metrics are hard to define. Domain adaptation has so far been applied mainly to classification problems. We propose the first use of unsupervised domain adaptation for discrete multimodal registration. Starting from a source domain for which quantised displacement labels are available as supervision, we transfer the network's output distribution to better resemble the target domain (another modality) using classifier discrepancies. To improve upon the sliced Wasserstein metric for 2D histograms, we present a novel approximation that projects predictions into 1D and computes the L1 distance of their cumulative sums. Our proof-of-concept demonstrates domain transfer from mono- to multimodal (multi-contrast) 2D registration of canine MRI scans and improves the registration accuracy from 33% (using sliced Wasserstein) to 44%.
  • Paper Type: methodological development
  • Track: short paper
  • TL;DR: a new Earth Mover's distance was used for classifier discrepancy domain adaptation for multimodal registration
  • Presentation Upload: zip
  • Presentation Upload Agreement: I agree that my presentation material (videos and slides) will be made public.
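The projected discrepancy described in the abstract exploits a known closed form: in 1D, the Earth Mover's distance between two discrete distributions equals the L1 distance between their cumulative sums. A minimal NumPy sketch follows; the function names and the axis-marginal projection of 2D histograms are illustrative assumptions, since the abstract does not specify the exact projection used in the paper.

```python
import numpy as np

def emd_1d(p, q):
    # 1D Earth Mover's distance between two discrete distributions
    # with equal total mass: L1 distance of their cumulative sums.
    return np.abs(np.cumsum(p) - np.cumsum(q)).sum()

def projected_emd_2d(P, Q):
    # Illustrative approximation for 2D histograms: project each
    # histogram onto both axes (marginalise) and sum the 1D EMDs.
    # This avoids solving a full 2D optimal-transport problem.
    return sum(emd_1d(P.sum(axis=a), Q.sum(axis=a)) for a in (0, 1))
```

For example, two point masses three bins apart in 1D give an EMD of 2 under unit spacing when they are two bins apart, and the 2D variant reduces each comparison to two cheap cumulative-sum passes, which is what makes the discrepancy practical as a training loss.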