Keywords: image classification, semi-supervised, domain adaptation
TL;DR: A strong method for semi-supervised domain adaptation that, unlike most recent state-of-the-art approaches, does not rely on explicit domain alignment.
Abstract: Most modern unsupervised domain adaptation (UDA) approaches are rooted in domain alignment, i.e., learning to align source and target features in order to train a target-domain classifier using source labels. In semi-supervised domain adaptation (SSDA), where the learner can access a few target-domain labels, prior approaches have followed UDA theory and used domain alignment for learning. We show that the SSDA setting is different and that a good target classifier can be learned without explicit alignment. Instead, we use self-supervised pretraining and consistency regularization to obtain well-separated target clusters, which aid in learning a low-error target classifier. This allows our method to outperform recent state-of-the-art approaches on large, challenging benchmarks such as DomainNet and VisDA-17.