Keywords: Unsupervised Medical Image Segmentation
TL;DR: We propose a novel Unified Uncertain Dual-prompts cross-domain Segmentation framework (UUDS) for unsupervised cross-domain medical image segmentation.
Abstract: Unsupervised cross-domain segmentation addresses the label-dependence problem in cross-domain medical image segmentation. Yet most existing methods treat domain adaptation and segmentation as \textbf{\textit{Two Separate Steps}} and focus primarily on global domain adaptation, lacking the ability to prioritize segmentation-specific information during adaptation. In addition, extracting domain-invariant feature representations remains an unavoidable challenge for cross-domain segmentation. These issues significantly degrade segmentation performance. To this end, we propose a novel Unified Uncertain Dual-prompts cross-domain Segmentation framework (UUDS) for unsupervised cross-domain medical image segmentation. Specifically, UUDS integrates the domain adaptation and segmentation models into a single framework, enabling interaction between the two tasks and allowing segmentation semantics to be emphasized during domain adaptation. In addition, UUDS creatively employs dual prompts, a domain prompt and a segmentation prompt, to learn domain-invariant feature representations from the cross-domain space. Furthermore, to strengthen the interaction between the two tasks, UUDS uses uncertainty estimation to dynamically compute segmentation pseudo-labels that directly supervise cross-domain adaptation, so that semantic information from unlabeled target images guides the adaptation process and keeps the model sensitive to segmentation. Extensive experiments on two public unsupervised cross-modality medical image segmentation benchmarks demonstrate that UUDS outperforms state-of-the-art methods, highlighting its effectiveness in addressing domain shifts.
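The abstract describes using uncertainty estimation to turn target-domain segmentation predictions into pseudo-labels that supervise domain adaptation. As a rough illustration only (the paper's exact formulation is not given here), a minimal PyTorch sketch of entropy-based uncertainty masking of pseudo-labels might look like the following; the function name, `seg_logits`, and `entropy_threshold` are hypothetical and not taken from the submission.

```python
import torch
import torch.nn.functional as F

def uncertainty_masked_pseudo_labels(seg_logits, entropy_threshold=0.5):
    """Illustrative sketch (not the paper's exact method): derive pseudo-labels
    from target-domain segmentation logits and keep only low-uncertainty pixels
    so that confident predictions can supervise domain adaptation.

    seg_logits: (B, C, H, W) raw logits from a segmentation head.
    entropy_threshold: hypothetical cutoff on normalized per-pixel entropy.
    """
    probs = F.softmax(seg_logits, dim=1)                      # per-pixel class probabilities
    entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=1)   # per-pixel predictive entropy
    num_classes = seg_logits.shape[1]
    entropy = entropy / torch.log(torch.tensor(float(num_classes)))  # normalize to [0, 1]
    pseudo_labels = probs.argmax(dim=1)                       # hard pseudo-labels
    confident_mask = entropy < entropy_threshold              # mask of low-uncertainty pixels
    return pseudo_labels, confident_mask
```

In such a scheme, the mask would gate which pixels of the unlabeled target image contribute to the loss that supervises the adaptation branch; whether UUDS uses a hard threshold, soft weighting, or another uncertainty measure is determined by the paper itself.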
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 8690