Unsupervised Domain Adaptation through Shape Modeling for Medical Image Segmentation

Published: 28 Feb 2022, Last Modified: 16 May 2023, MIDL 2022
Keywords: VAE, Medical Image Segmentation, Unsupervised Domain Adaptation
Abstract: Shape information is a strong and valuable prior for segmenting organs in medical images. However, most current deep learning based segmentation algorithms do not take shape information into consideration, which can lead to a bias towards texture. We aim to model shape explicitly and use it to help medical image segmentation. Previous work proposed Variational Autoencoder (VAE) based models to learn the distribution of shapes for a particular organ and used them to automatically evaluate the quality of a segmentation prediction by fitting it into the learned shape distribution. Building on this, we aim to incorporate the VAE into current segmentation pipelines. Specifically, we propose a new unsupervised domain adaptation pipeline based on a pseudo-label loss and a VAE reconstruction loss under a teacher-student learning paradigm. Both losses are optimized simultaneously and, in return, boost the segmentation task performance. Extensive experiments on three public pancreas segmentation datasets as well as two in-house pancreas segmentation datasets show consistent improvements, with at least a 2.8-point gain in the Dice score, demonstrating the effectiveness of our method in challenging unsupervised domain adaptation scenarios for medical image segmentation. We hope this work will advance shape analysis and geometric learning in medical imaging.
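To make the training objective concrete, the following is a minimal, hypothetical PyTorch sketch of the two-loss setup the abstract describes: a student segmentation network supervised on unlabeled target-domain images by an EMA teacher's pseudo-labels, plus a reconstruction loss from a shape VAE applied to the student's prediction. The module names (SegNet, ShapeVAE), the loss weight lambda_vae, and the EMA decay are illustrative assumptions, not the authors' implementation; see the linked repository for the actual code.

```python
# Hedged sketch, assuming: (1) a teacher-student UDA setup with an EMA teacher,
# (2) a shape VAE pre-trained on source-domain masks and frozen here, and
# (3) a simple weighted sum of the pseudo-label and VAE reconstruction losses.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class SegNet(nn.Module):
    """Toy segmentation backbone (placeholder for a real 2D/3D U-Net)."""
    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, n_classes, 1),
        )
    def forward(self, x):
        return self.net(x)  # per-pixel logits

class ShapeVAE(nn.Module):
    """Toy shape VAE: encodes a soft mask and reconstructs it."""
    def __init__(self, n_classes=2, latent=32, size=64):
        super().__init__()
        self.n_classes, self.size = n_classes, size
        self.enc = nn.Sequential(nn.Flatten(), nn.Linear(n_classes * size * size, 2 * latent))
        self.dec = nn.Linear(latent, n_classes * size * size)
    def forward(self, mask):
        mu, logvar = self.enc(mask).chunk(2, dim=1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        recon = self.dec(z).view(-1, self.n_classes, self.size, self.size)
        return recon, mu, logvar

student, vae = SegNet(), ShapeVAE()
teacher = copy.deepcopy(student)            # EMA teacher, not updated by gradients
for p in list(teacher.parameters()) + list(vae.parameters()):
    p.requires_grad_(False)                 # VAE assumed pre-trained and frozen
opt = torch.optim.Adam(student.parameters(), lr=1e-4)
lambda_vae, ema_decay = 0.1, 0.99           # assumed weighting / decay values

target_img = torch.randn(2, 1, 64, 64)      # unlabeled target-domain batch
student_logits = student(target_img)
with torch.no_grad():
    pseudo = teacher(target_img).argmax(1)  # teacher pseudo-labels

# Pseudo-label loss on the target domain.
loss_pseudo = F.cross_entropy(student_logits, pseudo)
# Shape reconstruction loss: a prediction whose shape fits the learned
# distribution is reconstructed faithfully; an implausible shape is penalized.
soft_pred = F.softmax(student_logits, dim=1)
recon, mu, logvar = vae(soft_pred)
loss_vae = F.mse_loss(recon, soft_pred)

loss = loss_pseudo + lambda_vae * loss_vae  # both terms optimized jointly
opt.zero_grad()
loss.backward()
opt.step()

# EMA update of the teacher from the student.
with torch.no_grad():
    for pt, ps in zip(teacher.parameters(), student.parameters()):
        pt.mul_(ema_decay).add_(ps, alpha=1 - ema_decay)
```

The intuition behind the second term is that minimizing the VAE reconstruction error pushes the student toward predictions whose shapes lie within the learned organ-shape distribution, which is what lets the unlabeled target domain benefit from the shape prior.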
Registration: I acknowledge that publication of this at MIDL and in the proceedings requires at least one of the authors to register and present the work during the conference.
Authorship: I confirm that I am the author of this work and that it has not been submitted to another publication before.
Paper Type: both
Primary Subject Area: Transfer Learning and Domain Adaptation
Secondary Subject Area: Segmentation
Confidentiality And Author Instructions: I read the call for papers and author instructions. I acknowledge that exceeding the page limit and/or altering the latex template can result in desk rejection.
Code And Data: The code can be found at https://github.com/yyNoBug/VAE_segmentation.git. The three public datasets used in our paper are as follows: 1. NIH Pancreas-CT Dataset (NIH): https://wiki.cancerimagingarchive.net/display/Public/Pancreas-CT 2. Medical Segmentation Decathlon (MSD): http://medicaldecathlon.com/ 3. Synapse Dataset: https://www.synapse.org/#!Synapse:syn3193805/wiki/217789 Unfortunately, we are unable to share the in-house datasets, as they contain sensitive patient data collected from the Johns Hopkins Hospital.