Abstract: Unsupervised Domain Adaptation (UDA) aims to learn domain-invariant representations of the source and target domains so that a classifier trained on the labeled source domain can be applied to the unlabeled target domain. Some recent methods construct an intermediate domain to alleviate the domain discrepancy. However, since the target samples are unlabeled, the model can be greatly confused when the proportions of source- and target-domain samples in the intermediate domain are nearly equal. Moreover, the stylistic bias of CNNs is often ignored. We therefore propose to gradually reduce the distribution discrepancy between domains by constructing a sequence of continuous intermediate domains. To address the style discrepancy between domains, we introduce Self-Exchange and Cross-Exchange modules. In addition, we propose to select reliable samples based on sample probability divergence. Experiments show that the proposed method achieves significant performance improvements on several cross-domain benchmarks.
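The abstract does not specify how the exchange modules are implemented; a common formulation of cross-domain style exchange swaps channel-wise feature statistics (mean and standard deviation) between domains. The sketch below illustrates that idea only, with NumPy arrays standing in for CNN feature maps; the function name `cross_exchange` and all details are assumptions, not the paper's actual module.

```python
import numpy as np

def cross_exchange(src, tgt, eps=1e-5):
    """Swap channel-wise style statistics between two feature maps
    of shape (C, H, W). This is a generic style-exchange sketch;
    the paper's Cross-Exchange module may differ."""
    def stats(x):
        mu = x.mean(axis=(1, 2), keepdims=True)
        sigma = x.std(axis=(1, 2), keepdims=True) + eps
        return mu, sigma

    mu_s, sig_s = stats(src)
    mu_t, sig_t = stats(tgt)
    # Normalize each feature map, then re-style it with the
    # other domain's per-channel statistics.
    src_restyled = (src - mu_s) / sig_s * sig_t + mu_t
    tgt_restyled = (tgt - mu_t) / sig_t * sig_s + mu_s
    return src_restyled, tgt_restyled
```

Under this formulation, the restyled source features carry the target domain's per-channel mean and variance while keeping the source content, which is one way to reduce the style discrepancy the abstract refers to.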