Abstract: Unsupervised domain adaptation transfers knowledge from a fully labeled source domain to a different target domain, where no labeled data are available. Several upper bounds on the target error have been proposed to characterize this transfer. For example, Ben-David et al. (2010) established a theory based on simultaneously minimizing the source error and the distance between the marginal distributions. However, most research ignores the joint error because of its intractability. In this work, we argue that the joint error is essential for domain adaptation, particularly when the domain gap is large. To address this problem, we propose a novel objective related to an upper bound on the joint error. Moreover, we adopt a hypothesis space induced by source and pseudo-target labels, which reduces the search space and further tightens this bound. To measure the dissimilarity between hypotheses, we define a novel cross-margin discrepancy that alleviates instability during adversarial learning. In addition, we present extensive empirical evidence showing that the proposed method boosts image classification accuracy on standard domain adaptation benchmarks.
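For reference, the bound of Ben-David et al. (2010) invoked above is usually stated as follows (notation follows that paper, not necessarily the manuscript; the $\lambda$ term is the joint error the abstract refers to):

$$\epsilon_T(h) \;\le\; \epsilon_S(h) \;+\; \tfrac{1}{2}\, d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{D}_S, \mathcal{D}_T) \;+\; \lambda, \qquad \lambda \;=\; \min_{h' \in \mathcal{H}} \big[\, \epsilon_S(h') + \epsilon_T(h') \,\big],$$

where $\epsilon_S$ and $\epsilon_T$ denote the source and target errors of a hypothesis $h \in \mathcal{H}$, and $d_{\mathcal{H}\Delta\mathcal{H}}$ measures the distance between the marginal distributions $\mathcal{D}_S$ and $\mathcal{D}_T$.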
Submission Length: Regular submission (no more than 12 pages of main content)
Previous TMLR Submission Url: https://openreview.net/forum?id=eOh0SICZOb&noteId=rtuKAx7tHq
Changes Since Last Submission: - We re-organized the paper's structure and moved some mathematical derivations to the appendix
- We revised some mathematical notation for consistency and highlighted the notation that is important for understanding the proposal
- We stated the theoretical assumption clearly and conducted an additional experiment to support its validity in practice
- We added detailed descriptions of the proposed algorithm
Code: https://drive.google.com/file/d/1-UTT8TP-3LBVUql109EZS9PwDidMqWLb/view?usp=sharing
Assigned Action Editor: ~Brian_Kulis1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 944