Robust Domain Adaptation with Noisy and Shifted Label Distribution
Abstract: Unsupervised Domain Adaptation (UDA) aims to transfer knowledge from labeled source domains to unlabeled target domains whose data or label distributions differ. Previous UDA methods have achieved great success when the labels in the source domain are clean. However, acquiring even scarce clean labels in the source domain is costly. In the presence of label noise in the source domain, traditional UDA methods degrade severely because they do not account for the noisy labels. In this paper, we propose an approach named Robust Self-training with Label Refinement (RSLR) to address this issue. RSLR adopts a self-training framework that maintains a Labeling Network (LNet) on the source domain, which provides confident pseudo-labels for target samples, and a Target-specific Network (TNet) trained on the pseudo-labeled samples. To combat the effect of label noise, LNet progressively identifies and refines mislabeled source samples. In combination with class re-balancing to address the label distribution shift, RSLR achieves strong performance on extensive benchmark datasets.
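Two of the components summarized in the abstract, confident pseudo-labeling of target samples and class re-balancing, can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the confidence threshold and the inverse-frequency weighting scheme are assumptions for demonstration.

```python
import numpy as np

def select_confident_pseudolabels(probs, threshold=0.9):
    """Keep target samples whose top predicted class probability meets
    `threshold`; return their indices and hard pseudo-labels.
    `probs` is an (n_samples, n_classes) array of softmax outputs."""
    confidence = probs.max(axis=1)
    labels = probs.argmax(axis=1)
    keep = confidence >= threshold
    return np.where(keep)[0], labels[keep]

def class_rebalance_weights(labels, num_classes):
    """Inverse-frequency sample weights so that rare classes contribute
    as much as frequent ones when training on the pseudo-labeled set."""
    counts = np.bincount(labels, minlength=num_classes).astype(float)
    counts[counts == 0] = 1.0  # avoid division by zero for empty classes
    per_class = counts.sum() / (num_classes * counts)
    return per_class[labels]

# Toy example: three target samples, two classes.
probs = np.array([[0.95, 0.05],
                  [0.60, 0.40],
                  [0.10, 0.90]])
idx, pseudo = select_confident_pseudolabels(probs, threshold=0.9)
# Only samples 0 and 2 clear the threshold; sample 1 is too uncertain.
weights = class_rebalance_weights(np.array([0, 0, 0, 1]), num_classes=2)
# The minority class (one sample) gets a proportionally larger weight.
```

In the full method, the selected pseudo-labeled samples would train TNet while the weights counteract a skewed pseudo-label distribution; here both pieces are shown in isolation on synthetic arrays.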