A Collaborative Learning Method based on Cross-domain Representation Correlation Congruence Distillation

Abstract: In unsupervised domain adaptation (UDA), the learning of domain-invariant representations is easily dominated by the labeled source-domain data, while the distribution characteristics of the target domain are often ignored. Although several studies have exploited the inter-class relations among samples to reduce classifier errors in the target domain, they still do not address the distillation of shared knowledge during cross-domain transfer. In this paper, we propose a two-stage adaptation method that maximizes cross-domain invariant features, designing a collaborative learning model based on divergence congruence between source- and target-domain representation correlations. In the first stage, a generative adversarial network extracts common features from unpaired images, establishing positive transfer of the classifier between the two domain spaces. In the second stage, correlation coefficients are computed in the mixed label space, and an uncertainty weighting mechanism emphasizes the features most discriminative for cross-domain prediction while suppressing negative transfer. With this cross-domain knowledge, few-shot learning in the target domain further promotes the training of adaptive information, reinforcing the collaborative effect across the two domains. Domain adaptation experiments in the UDA setting are conducted on two benchmark datasets, ImageCLEF-DA and Office-Home. Compared with state-of-the-art algorithms, the average recognition accuracy improves by nearly 2%.
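The abstract describes matching the correlation structure of source- and target-domain representations and combining the resulting congruence term with the supervised source loss via uncertainty weighting. The sketch below is a minimal illustration of that idea, not the authors' exact formulation: it assumes PyTorch, a cosine-similarity (Gram-style) correlation matrix per batch, and learned log-variance uncertainty weights; all function and class names are hypothetical.

```python
# Minimal sketch (assumptions: PyTorch; batch-level correlation congruence between
# source and target features; learned log-variance uncertainty weighting).
import torch
import torch.nn as nn
import torch.nn.functional as F


def correlation_matrix(feats: torch.Tensor) -> torch.Tensor:
    """Pairwise cosine-similarity (correlation) matrix of a batch of features."""
    f = F.normalize(feats, dim=1)   # [B, D] rows normalized to unit length
    return f @ f.t()                # [B, B] correlation matrix


def congruence_loss(src_feats: torch.Tensor, tgt_feats: torch.Tensor) -> torch.Tensor:
    """Penalize divergence between source- and target-domain correlation structures."""
    return F.mse_loss(correlation_matrix(src_feats), correlation_matrix(tgt_feats))


class UncertaintyWeightedLoss(nn.Module):
    """Combine several losses with learned uncertainty (log-variance) weights."""
    def __init__(self, num_losses: int = 2):
        super().__init__()
        self.log_vars = nn.Parameter(torch.zeros(num_losses))

    def forward(self, losses):
        total = 0.0
        for i, loss in enumerate(losses):
            precision = torch.exp(-self.log_vars[i])
            total = total + precision * loss + self.log_vars[i]
        return total


if __name__ == "__main__":
    src = torch.randn(32, 256)      # dummy source-domain batch features
    tgt = torch.randn(32, 256)      # dummy target-domain batch features
    cls_loss = torch.tensor(1.2)    # placeholder classification loss on labeled source data
    weigher = UncertaintyWeightedLoss(num_losses=2)
    total = weigher([cls_loss, congruence_loss(src, tgt)])
    print(total.item())
```

In this toy setup the uncertainty weights are trained jointly with the feature extractor, so the balance between the supervised source loss and the cross-domain congruence term is learned rather than hand-tuned.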