Semi-supervised Domain Adaptation with Prototypical Alignment and Consistency Learning

28 Sept 2020 (modified: 22 Oct 2023) · ICLR 2021 Conference Withdrawn Submission · Readers: Everyone
Abstract: Domain adaptation enhances the generalizability of a model across domains in the presence of domain shift. Most research effort has been devoted to Unsupervised Domain Adaptation (UDA), which trains a model jointly on labeled source data and unlabeled target data. This paper studies how much it helps to additionally label a few target samples (e.g., one sample per class). This is the so-called semi-supervised domain adaptation (SSDA) problem, and the few labeled target samples are termed ``landmarks''. To explore the full potential of the landmarks, we incorporate a prototypical alignment (PA) module that computes a target prototype for each class from the landmarks; source samples are then aligned with the target prototype of the same class. To further alleviate label scarcity, we propose a data-augmentation-based solution. Specifically, we severely perturb the labeled images, making PA non-trivial to achieve and thus promoting model generalizability. Moreover, we apply consistency learning to the unlabeled target images by perturbing each image with both light and strong transformations. The strongly perturbed image can then enjoy ``supervised-like'' training, using the pseudo label inferred from the lightly perturbed one. Experiments show that the proposed method, though simple, achieves significant performance gains over state-of-the-art methods, and it is flexible enough to serve as a plug-and-play component for various existing UDA methods, improving adaptation performance whenever landmarks are provided.
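
The abstract describes two losses: prototypical alignment of source features to landmark-derived class prototypes, and FixMatch-style consistency between lightly and strongly perturbed views of unlabeled target images. The paper's exact formulation is not reproduced here, so the following is only a minimal PyTorch-style sketch; the function names, the squared-distance alignment objective, and the confidence threshold are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the two losses suggested by the abstract (assumptions noted inline).
import torch
import torch.nn.functional as F

def prototypical_alignment_loss(src_feats, src_labels, lm_feats, lm_labels, num_classes):
    """Pull each source feature toward the target prototype of its class.

    Prototypes are class means of the few labeled target ``landmark'' features.
    Assumes at least one landmark per class, as in the paper's SSDA setting.
    """
    protos = torch.stack([
        lm_feats[lm_labels == c].mean(dim=0) for c in range(num_classes)
    ])  # (C, D): one prototype per class
    # Squared Euclidean distance to the same-class prototype (an assumed choice
    # of alignment objective; the paper may use a different metric).
    return ((src_feats - protos[src_labels]) ** 2).sum(dim=1).mean()

def consistency_loss(model, weak_imgs, strong_imgs, threshold=0.95):
    """Consistency learning on unlabeled target images: the pseudo label from
    the lightly perturbed view supervises the strongly perturbed view,
    keeping only confident predictions (threshold value is an assumption)."""
    with torch.no_grad():
        probs = F.softmax(model(weak_imgs), dim=1)
        conf, pseudo = probs.max(dim=1)
        mask = (conf >= threshold).float()
    loss = F.cross_entropy(model(strong_imgs), pseudo, reduction="none")
    return (loss * mask).mean()
```

In training, these two terms would be added to the usual supervised cross-entropy on labeled source and landmark samples, with the severe perturbation of labeled images applied before computing the PA loss.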
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Supplementary Material: zip
Community Implementations: 1 code implementation (https://www.catalyzex.com/paper/arxiv:2104.09136/code)
Reviewed Version (pdf): https://openreview.net/references/pdf?id=R5ZkmN9J4i