A Simple yet Effective Adaptive Inter-organ Contrastive Learning Framework for Unsupervised Domain Adaptation
Keywords: Unsupervised Domain Adaptation, Multi-organ Segmentation
Abstract: Recent unsupervised domain adaptation methods have shown promise for cross-modal medical image segmentation, but their reliance on adversarial learning for global feature alignment often compromises fine-grained semantic consistency and degrades source-domain performance. Although most contrastive learning (CL) methods learn strong semantic representations, they extract features with a pixel-wise binary thresholding strategy, which can yield a sparse feature space and poor spatial alignment. We therefore present a CL method that leverages pseudo-labels for organ-wise feature patch sampling. The proposed simple yet effective adaptive inter-organ contrastive learning framework incorporates a modality-adaptive encoder to handle multi-modal variations while maintaining shared feature representations. Furthermore, the model is trained with a combination of supervised consistency learning and unsupervised pseudo-label-guided contrastive learning, promoting a more discriminative and compact shared latent space. Extensive experiments and ablation studies on an orbital and a cardiac dataset demonstrate the effectiveness of each component and significant improvements in segmentation results over reference methods.
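As a rough illustration of the idea sketched in the abstract, the snippet below shows one plausible way to aggregate pseudo-labeled pixel features into per-organ prototypes and contrast them across domains with an InfoNCE-style loss. This is a hypothetical minimal sketch, not the authors' implementation: the function names, the prototype averaging, and the temperature value are all assumptions for illustration.

```python
import math
from collections import defaultdict

def organ_prototypes(features, pseudo_labels):
    """Group per-pixel feature vectors by pseudo-label organ id and
    average them into one prototype vector per organ (label 0 = background,
    skipped). Both the grouping and the averaging are illustrative choices."""
    groups = defaultdict(list)
    for feat, lab in zip(features, pseudo_labels):
        if lab > 0:
            groups[lab].append(feat)
    protos = {}
    for lab, vecs in groups.items():
        dim = len(vecs[0])
        protos[lab] = [sum(v[d] for v in vecs) / len(vecs) for d in range(dim)]
    return protos

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den

def inter_organ_contrastive_loss(protos_src, protos_tgt, tau=0.1):
    """InfoNCE-style loss: the same organ's prototypes across the two
    domains are positives; prototypes of different organs are negatives.
    tau is an assumed temperature hyperparameter."""
    loss, n = 0.0, 0
    for lab, p in protos_src.items():
        if lab not in protos_tgt:
            continue
        pos = math.exp(cosine(p, protos_tgt[lab]) / tau)
        denom = sum(math.exp(cosine(p, q) / tau) for q in protos_tgt.values())
        loss += -math.log(pos / denom)
        n += 1
    return loss / max(n, 1)
```

Under this sketch, the loss is small when corresponding organs are aligned across domains and grows when organ features are mismatched, which is the behavior the organ-wise sampling strategy aims for.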
Primary Subject Area: Unsupervised Learning and Representation Learning
Secondary Subject Area: Segmentation
Registration Requirement: Yes
Read CFP & Author Instructions: Yes
Originality Policy: Yes
Single-blind & Not Under Review Elsewhere: Yes
LLM Policy: Yes
Submission Number: 120