Causality-Inspired Unsupervised Domain Adaptation With Target Style Imitation for Medical Image Segmentation

Published: 2025, Last Modified: 04 Nov 2025. IEEE Trans. Circuits Syst. Video Technol., 2025. CC BY-SA 4.0
Abstract: Deep learning performance may degrade substantially on unseen heterogeneous data. While most unsupervised domain adaptation (UDA) methods address this through image alignment, they often ignore uncertain style fluctuations within the target domain; when testing image styles vary in both direction and intensity, such models may fail to adapt. Furthermore, existing UDA methods tend to over-rely on domain-level alignment of entire features, potentially exploiting semantic-content-independent cues (e.g., intensity) as shortcut features. To address these limitations, this paper introduces an innovative, model-agnostic Causality-inspired Representation Learning Based on Target Style Imitation method for UDA. Specifically, we propose a novel Target Style Imitation (TSI) data augmentation approach to diversify the training data and align the styles of training images with those of unseen target testing images. TSI constructs a Gaussian distribution over the target-domain style and simulates unseen testing style variations through random sampling. Additionally, inspired by the stability and generalizability of causal mechanisms, we propose Causality-inspired Representation Learning (CRL) built on TSI to enforce feature representations that adhere to the causal properties (i.e., Separation and Independence) essential for robust UDA, thereby encouraging the model to focus on domain-invariant semantic features. Our method surpasses state-of-the-art methods on two cross-modality medical image segmentation datasets.
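The abstract describes TSI as fitting a Gaussian distribution to target-domain style and sampling from it to simulate unseen style variation. The sketch below is only one plausible reading of that idea, assuming "style" means per-channel intensity statistics (mean and standard deviation) and that sampled statistics are used to restyle source images; the function names and details are illustrative assumptions, not the authors' implementation.

```python
import torch

# Hypothetical sketch of Target Style Imitation (TSI) as described in the abstract:
# model the target-domain style (per-channel mean/std) as a Gaussian and resample
# new styles to restyle source images. Names and details are assumptions.

def fit_target_style(target_images):
    """Estimate a Gaussian over target-domain style statistics.

    target_images: tensor of shape (N, C, H, W) from the target domain.
    Returns the mean and std of the per-image channel-wise (mu, sigma) style vectors.
    """
    mu = target_images.mean(dim=(2, 3))             # (N, C) per-image channel means
    sigma = target_images.std(dim=(2, 3))           # (N, C) per-image channel stds
    style = torch.cat([mu, sigma], dim=1)           # (N, 2C) style vectors
    return style.mean(dim=0), style.std(dim=0)      # Gaussian parameters over styles

def imitate_target_style(source_image, style_mean, style_std):
    """Restyle a (C, H, W) source image with a style sampled from the target Gaussian."""
    c = source_image.shape[0]
    sampled = style_mean + style_std * torch.randn_like(style_mean)
    new_mu, new_sigma = sampled[:c], sampled[c:].clamp_min(1e-6)
    mu = source_image.mean(dim=(1, 2), keepdim=True)
    sigma = source_image.std(dim=(1, 2), keepdim=True).clamp_min(1e-6)
    normalized = (source_image - mu) / sigma        # strip the source style
    return normalized * new_sigma.view(c, 1, 1) + new_mu.view(c, 1, 1)
```

Under this reading, each training iteration would draw a fresh style sample per source image, so the model sees a spread of target-like intensities rather than a single fixed target style.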