Keywords: Abdominal organ segmentation, Unsupervised domain adaptation, Style translation, Contrastive learning
Abstract: In recent years, deep learning–based multi-modal abdominal organ segmentation has played an increasingly important role in clinical diagnosis and treatment. However, the development across imaging modalities has been uneven: CT segmentation has achieved remarkable progress owing to large-scale, high-quality annotated datasets, while MRI and PET segmentation still suffers from a scarcity of annotated data. Achieving efficient cross-modality transfer from CT to MRI/PET with limited or no annotations remains a key challenge for advancing intelligent multi-modal abdominal imaging. To address this, we frame the problem as one of unsupervised cross-modality domain adaptation and propose a two-stage framework that jointly optimizes image generation and segmentation prediction. In the first stage, a generative network and a supervised segmentation network are combined to produce pseudo-labels for unlabeled MRI and PET scans from labeled CT samples. In the second stage, a simple yet effective pseudo-label selection strategy is applied to improve label reliability and model training. Experiments on Task 3 of the FLARE25 Challenge show that our method achieves average DSC and NSD scores of 78.66\% and 85.42\% on the MRI validation set, and 82.33\% and 73.54\% on the PET validation set. The per-case runtime and GPU memory usage are 8.87 s and 5012.87 MB for MRI, and 8.49 s and 4672.31 MB for PET. The proposed method reduces the cross-modality domain gap while significantly lowering training resource consumption. Our code is available at \url{https://github.com/wenzizzz/Flare25Task3}.
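The abstract does not detail the pseudo-label selection strategy; one common instantiation of such a strategy is per-voxel confidence thresholding. The minimal PyTorch sketch below is an illustrative assumption only (the helper name select_pseudo_labels, the 0.9 threshold, and the 14-class setup are not taken from the released code): it masks out low-confidence voxels in the pseudo-labels produced by the CT-trained segmenter before they are used in second-stage training.

    import torch

    def select_pseudo_labels(logits, conf_threshold=0.9, ignore_index=-1):
        # logits: (N, C, D, H, W) raw outputs of the CT-trained segmenter on MRI/PET volumes.
        probs = torch.softmax(logits, dim=1)               # per-class voxel probabilities
        max_prob, pseudo = probs.max(dim=1)                # confidence and argmax label per voxel
        pseudo[max_prob < conf_threshold] = ignore_index   # drop unreliable voxels from the loss
        return pseudo

    # Illustrative usage on random logits (14 channels: 13 organs + background, assumed).
    dummy_logits = torch.randn(1, 14, 8, 64, 64)
    labels = select_pseudo_labels(dummy_logits)
    print(f"fraction of voxels kept: {(labels >= 0).float().mean().item():.2%}")

The masked voxels can then be excluded from the segmentation loss (e.g., via an ignore index), so that only reliable pseudo-labels contribute to training.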
Submission Number: 10