Abstract: Recent source-free domain adaptation (SFDA) methods have focused on learning meaningful cluster structures in the feature space, successfully transferring knowledge from the source domain to the unlabeled target domain without accessing the private source data. However, existing methods rely on pseudo-labels generated by the source model, which can be noisy due to domain shift, posing a significant challenge to their efficacy. In this paper, we study SFDA from the perspective of learning with label noise (LLN) and prove that the label noise in SFDA follows a distribution assumption different from that in conventional LLN scenarios. This discrepancy renders some existing LLN methods less effective in SFDA. To address this issue and comprehensively improve adaptation performance, we tackle label noise in SFDA from two perspectives. First, we demonstrate that the early-time training phenomenon (ETP), previously observed in LLN settings, also exists in SFDA. Hence, we introduce a simple yet effective approach that leverages ETP to improve current SFDA algorithms. Second, we propose a noise and variance control module that mitigates the label noise discrepancy between SFDA and LLN, enhancing the effectiveness of LLN methods in SFDA. Extensive empirical evaluation and analysis on four benchmarks show that our methods substantially outperform existing baselines.