Unsupervised Domain Adaptation via Minimized Joint Error

28 Sept 2020 (modified: 05 May 2023) · ICLR 2021 Conference Withdrawn Submission
Abstract: Unsupervised domain adaptation transfers knowledge learned from a source domain to a different (but related) target distribution, for which few or no labeled data are available. Several upper bounds on the target error have been proposed to guide this transfer; e.g., Ben-David et al. (2010) established a theory based on simultaneously minimizing the source error and the distance between the marginal distributions. However, most works ignore the joint error term in such bounds. In this paper, we argue that the joint error is essential for the domain adaptation problem, in particular when samples from different classes in the source and target domains become closely aligned by matching the marginal distributions. To address this, we propose a novel upper bound that explicitly includes the joint error. Moreover, we utilize a constrained hypothesis space to further tighten this bound. Furthermore, we propose a novel cross margin discrepancy to measure the dissimilarity between hypotheses, and we show that this discrepancy alleviates instability during adversarial learning. Finally, we present extensive empirical evidence that our proposal outperforms related approaches in image classification error rates on standard domain adaptation benchmarks.
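For context, the classical bound from Ben-David et al. (2010) that the abstract refers to can be written as follows; the notation ($\epsilon_S$, $\epsilon_T$, $d_{\mathcal{H}\Delta\mathcal{H}}$) follows that paper, not this submission, and the $\lambda$ term is the joint error that this work argues should not be ignored.

```latex
% Target-error bound of Ben-David et al. (2010), in that paper's notation:
% for any hypothesis h in a class \mathcal{H},
\[
  \epsilon_T(h) \;\le\; \epsilon_S(h)
  \;+\; \tfrac{1}{2}\, d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{D}_S, \mathcal{D}_T)
  \;+\; \lambda,
  \qquad
  \lambda \;=\; \min_{h' \in \mathcal{H}} \bigl[ \epsilon_S(h') + \epsilon_T(h') \bigr],
\]
% where \epsilon_S and \epsilon_T are the source and target errors,
% d_{\mathcal{H}\Delta\mathcal{H}} measures the divergence between the marginal
% distributions, and \lambda is the joint error of the ideal joint hypothesis --
% the term that is typically treated as negligible and that this paper bounds
% explicitly instead.
```

Aligning only the first two terms can be misleading: if matching the marginals pushes differently labeled source and target samples together, $\lambda$ grows and the bound loosens, which is the failure mode the abstract highlights.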
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Reviewed Version (pdf): https://openreview.net/references/pdf?id=UH6tYnKJdH