A 3D Unsupervised Domain Adaptation Framework Combining Style Translation and Self-Training for Abdominal Organ Segmentation
Keywords: Abdominal organ segmentation, Unsupervised domain adaptation, Style translation, Self-training.
TL;DR: We developed a 3D unsupervised domain adaptation framework that integrates style translation and self-training to enhance MR segmentation accuracy.
Abstract: Accurate segmentation of abdominal organs is crucial for the diagnosis and treatment of disease. Thanks to advances in deep learning, the performance of CT abdominal organ segmentation has improved substantially. However, the scarcity of labeled MR data makes it challenging to leverage existing CT data to adapt models to the MR modality. Unsupervised domain adaptation has shown the potential to alleviate this challenge by learning from labeled source-domain images together with a large number of unlabeled target-domain images. In this work, we first generate diverse fake MR data through a style translation network to assist segmentation model training. Next, we follow a self-training strategy: the segmentation network trained on mixed-style images is used, together with strategies such as pseudo-label filtering and elastic registration, to generate accurate pseudo-labels for the MR data. Finally, we adopt a two-stage framework that first localizes the region of interest and then performs fine segmentation within it, which further improves segmentation performance and efficiency. Experiments on the FLARE 2024 validation set demonstrate that our method achieves excellent segmentation performance as well as fast, low-resource inference: the average DSC and NSD scores are 79.42% and 86.46%, respectively, the average inference time is 2.81 s, and the maximum GPU memory usage is 4135 MB. The code is available at https://github.com/TJUQiangChen/FLARE24-task3.
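The pseudo-label filtering step of the self-training stage can be sketched as below. This is a minimal illustration under assumed conventions (the function name, the confidence threshold, and the ignore index are hypothetical, not the authors' exact implementation): voxels where the teacher network's top-class probability is low are excluded from the pseudo-label supervision.

```python
import numpy as np

# Hypothetical sketch of confidence-based pseudo-label filtering for
# self-training. Threshold, function name, and ignore index are assumptions.
IGNORE_INDEX = 255  # voxels with this label are excluded from the training loss

def filter_pseudo_labels(probs: np.ndarray, threshold: float = 0.9) -> np.ndarray:
    """probs: (C, D, H, W) softmax output of the teacher segmentation network.

    Returns per-voxel hard pseudo-labels; voxels whose top-class probability
    falls below `threshold` are set to IGNORE_INDEX so they do not supervise
    the student network.
    """
    labels = probs.argmax(axis=0)       # hard pseudo-labels per voxel
    confidence = probs.max(axis=0)      # top-class probability per voxel
    labels[confidence < threshold] = IGNORE_INDEX
    return labels

# Usage on a toy 2-class probability map with one confident and one
# uncertain voxel: the uncertain voxel is masked out.
probs = np.array([[[[0.95, 0.40]]],
                  [[[0.05, 0.60]]]])    # shape (2, 1, 1, 2)
pseudo = filter_pseudo_labels(probs, threshold=0.9)
# pseudo[0, 0, 0] == 0 (confident), pseudo[0, 0, 1] == IGNORE_INDEX (filtered)
```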
Submission Number: 1