Rethinking nnU-Net for Cross-Modality Unsupervised Domain Adaptation in Abdominal Organ Segmentation

Published: 31 Mar 2025, Last Modified: 31 Mar 2025 · FLARE 2024 (with Minor Revisions) · CC BY 4.0
Keywords: Cross-modality, Unsupervised domain adaptation, Abdominal organ segmentation
Abstract: Research on abdominal organ segmentation has been extensive for computed tomography (CT) scans but limited for magnetic resonance (MR) scans due to the scarcity of annotated MR data. This gap highlights the need for effective cross-modality unsupervised domain adaptation (UDA) techniques that leverage annotated CT scans to improve MR scan segmentation. While nnU-Net is recognized as a robust baseline for medical image segmentation, its application in UDA has been underexplored. In this paper, we rethink nnU-Net as a tool to enhance UDA methods for abdominal organ segmentation in MR scans and introduce a three-stage pipeline to address this challenge. In the first stage, we develop an nnU-Net-based UDA framework with a triple-level alignment strategy to facilitate knowledge transfer from CT scans to MR scans. In the second stage, we use the nnU-Net trained in the first stage to generate pseudo labels for unlabeled MR scans; we then fine-tune this model on both the labeled CT scans and the pseudo-labeled MR scans, and additionally train a separate nnU-Net from scratch on the pseudo-labeled MR scans. In the third stage, we address resource constraints by training a lightweight nnU-Net on selected unlabeled MR scans and their corresponding pseudo labels. We evaluate our approach on Task 3 of the FLARE 2024 challenge, where the lightweight nnU-Net achieves a mean Dice Similarity Coefficient (DSC) of 75.37% and a mean Normalized Surface Dice (NSD) of 81.67% on the validation set. Our code is publicly available at https://github.com/Chen-Ziyang/FLARE2024-Task3.
Submission Number: 3
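
The second stage of the pipeline hinges on pseudo labels produced by the stage-1, CT-to-MR adapted nnU-Net. The snippet below is a minimal, illustrative sketch of such a pseudo-labeling pass, assuming a trained PyTorch segmentation model and an iterator over unlabeled MR volumes; the function name, confidence threshold, and ignore label are hypothetical choices for illustration, not details taken from the paper or the nnU-Net codebase.

    # Sketch of a stage-2 pseudo-labeling pass (illustrative assumptions throughout).
    import torch

    @torch.no_grad()
    def generate_pseudo_labels(model, mr_volumes, device="cuda", conf_thresh=0.9):
        """Run the stage-1 adapted segmentation model over unlabeled MR volumes and
        keep only voxels whose softmax confidence exceeds conf_thresh; low-confidence
        voxels are marked with an ignore label so they are skipped during fine-tuning."""
        model.eval().to(device)
        pseudo_labeled = []
        for scan in mr_volumes:                      # scan: tensor of shape (1, C, D, H, W)
            scan = scan.to(device)
            probs = torch.softmax(model(scan), dim=1)
            conf, labels = probs.max(dim=1)          # per-voxel confidence and class index
            labels[conf < conf_thresh] = 255         # 255 = hypothetical "ignore" label
            pseudo_labeled.append((scan.cpu(), labels.cpu()))
        return pseudo_labeled

The resulting (MR scan, pseudo label) pairs would then be mixed with the labeled CT data for fine-tuning, and reused on their own to train the separate from-scratch and lightweight nnU-Net models described in the abstract.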