Shift-Driven Learning for Unsupervised Domain Adaptation

Published: 2025 · Last Modified: 27 Jan 2026 · ICME 2025 · CC BY-SA 4.0
Abstract: Self-training is widely used in unsupervised domain adaptation (UDA), assigning pseudo labels to unlabeled samples. However, existing self-training strategies introduce bias: potentially inaccurate pseudo labels accumulate errors during self-training (self-training shift), and the inability to accurately distinguish features introduces prediction bias (class shift). To address these issues, we propose Shift-Driven Learning (SDL). First, we decouple the generation and utilization of pseudo labels to mitigate direct error accumulation. Second, we measure the maximum training shift of the data, at which the classifier achieves high accuracy on labeled data while making as many mistakes as possible on unlabeled data; we then adversarially optimize the feature representations to indirectly decrease the self-training shift. Third, we minimize the class shift through a data rearrangement strategy and joint contrastive learning, which find class-level discriminative feature representations. Extensive experiments show that SDL outperforms state-of-the-art methods on three UDA datasets with considerable gains.
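The adversarial step in the abstract can be illustrated with a toy objective. The sketch below is an assumption about the general shape of such a loss (the paper's exact formulation is not given here): the classifier maximizes a "shift" equal to its error on unlabeled target data (against pseudo labels) minus its error on labeled source data, while the feature extractor is updated to minimize that same quantity. All function names are hypothetical.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the class dimension.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(logits, labels):
    # Mean negative log-likelihood of the given (pseudo) labels.
    p = softmax(logits)
    return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))

def shift_objective(logits_src, y_src, logits_tgt, pseudo_tgt):
    # Hypothetical "training shift": large when the classifier is accurate
    # on labeled source data yet disagrees with pseudo labels on unlabeled
    # target data. In an adversarial scheme of this kind, the classifier
    # would maximize this value and the feature extractor would minimize it.
    return cross_entropy(logits_tgt, pseudo_tgt) - cross_entropy(logits_src, y_src)
```

Under this toy objective, target predictions that contradict their pseudo labels raise the shift, which is exactly the worst case the feature extractor is then trained to suppress.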