Augmented Self-Labeling for Source-Free Unsupervised Domain Adaptation

Anonymous

Jul 26, 2021 · ICCV 2021 Workshop VIPriors Blind Submission
  • Keywords: Source-Free Unsupervised Domain Adaptation, Self-Labeling, Model Regularization
  • TL;DR: This paper proposes Augmented Self-Labeling (ASL), a source-free unsupervised domain adaptation method that jointly optimizes the model and the labels for target data, starting from the source model.
  • Abstract: Unsupervised domain adaptation aims to learn a model that generalizes to the target domain given labeled source data and unlabeled target data. However, source data may be unavailable in practice due to data privacy concerns or decentralized learning architectures. In this paper, we address the source-free unsupervised domain adaptation problem, where only the trained source model and unlabeled target data are given. To this end, we propose an Augmented Self-Labeling (ASL) method that jointly optimizes the model and the labels for target data, starting from the source model. It alternates between two steps: augmented self-labeling improves pseudo-labels by solving an optimal transport problem with the Sinkhorn-Knopp algorithm, and model re-training trains the model under the supervision of the improved pseudo-labels. We further introduce model regularization terms to improve model re-training. Experiments show that our method achieves comparable or better results than state-of-the-art methods on standard benchmarks.
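The self-labeling step described in the abstract, assigning balanced pseudo-labels to target samples via entropic optimal transport, can be sketched as below. This is a minimal illustration of the generic Sinkhorn-Knopp balancing procedure, not the paper's actual implementation; the function name, the entropic regularization strength `eps`, and the uniform class-marginal assumption are all illustrative choices.

```python
import numpy as np

def sinkhorn_pseudo_labels(probs, n_iters=50, eps=0.05):
    """Turn soft predictions into balanced pseudo-labels via Sinkhorn-Knopp.

    probs: (N, K) array of model softmax outputs for N target samples and
    K classes. Returns an (N, K) soft assignment matrix whose class columns
    receive (approximately) equal total mass, which discourages the
    degenerate solution of labeling every sample with one class.
    """
    N, K = probs.shape
    # Entropic-OT kernel: sharpen the scores by the inverse temperature 1/eps.
    Q = np.power(probs, 1.0 / eps)
    Q /= Q.sum()
    for _ in range(n_iters):
        # Column normalization: each class gets 1/K of the total mass.
        Q /= Q.sum(axis=0, keepdims=True)
        Q /= K
        # Row normalization: each sample gets 1/N of the total mass.
        Q /= Q.sum(axis=1, keepdims=True)
        Q /= N
    # Rescale so each row sums to 1, i.e. a soft pseudo-label per sample.
    return Q * N
```

In the alternating scheme the abstract describes, the model would then be re-trained on these pseudo-labels (e.g. with a cross-entropy loss, plus the paper's regularization terms), and the two steps repeated.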