A DIRT-T Approach to Unsupervised Domain Adaptation


Nov 07, 2017 (modified: Nov 07, 2017) ICLR 2018 Conference Blind Submission
  • Abstract: Domain adaptation refers to the problem of leveraging labels in a source domain to boost learning performance in a new target domain where labels are scarce or entirely unavailable. A recent approach for finding a common representation of the two domains is domain adversarial training (Ganin & Lempitsky, 2015), which attempts to induce a feature extractor that matches the source and target feature distributions in some feature space. However, domain adversarial training faces two critical limitations: (1) if the feature extraction function has high capacity, feature distribution matching is under-constrained; (2) in the non-conservative domain adaptation setting, where no single classifier can perform well jointly in both the source and target domains, domain adversarial training is over-constrained. In this paper, we address these issues through the lens of the cluster assumption, i.e., decision boundaries should not cross high-density data regions. We propose two novel and related models: (1) the Virtual Adversarial Domain Adaptation (VADA) model, which combines domain adversarial training with a penalty term that punishes violations of the cluster assumption; (2) the Decision-boundary Iterative Refinement Training with a Teacher (DIRT-T) model, which takes the VADA model as initialization and employs natural gradient steps to further minimize the cluster assumption violation. Extensive empirical results demonstrate that the combination of these two models significantly improves the state-of-the-art performance on several visual domain adaptation benchmarks.
  • TL;DR: SOTA on unsupervised domain adaptation by leveraging the cluster assumption.
  • Keywords: domain adaptation, unsupervised learning, semi-supervised learning
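
To make the cluster assumption concrete: one common way to penalize decision boundaries that cross high-density regions is to minimize the conditional entropy of the classifier's predictions on unlabeled target data, so that predictions become confident (and boundaries are pushed into low-density regions). The sketch below is an illustration only, not the paper's implementation; the function names are hypothetical and the VAT term the paper also uses is omitted.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the class axis.
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def conditional_entropy(logits):
    """Average prediction entropy on unlabeled target samples.

    Low entropy = confident predictions = decision boundary far
    from the data, i.e. the cluster assumption is respected.
    """
    p = softmax(logits)
    return float(-(p * np.log(p + 1e-12)).sum(axis=1).mean())

# Confident target predictions (boundary in a low-density region)
confident = np.array([[8.0, -8.0], [-8.0, 8.0]])
# Uncertain target predictions (boundary crossing the data cluster)
uncertain = np.array([[0.1, -0.1], [-0.1, 0.1]])

# The penalty is much larger when the boundary crosses the cluster.
penalty_far = conditional_entropy(confident)
penalty_near = conditional_entropy(uncertain)
```

In a training loop, a term like `lambda_t * conditional_entropy(target_logits)` would simply be added to the source classification loss and the domain adversarial loss; `lambda_t` here is a hypothetical weighting hyperparameter.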