- Abstract: In machine learning tasks, overfitting frequently occurs when the number of samples in the target domain is insufficient, because the generalization ability of the classifier is poor in this circumstance. To address this problem, transfer learning exploits knowledge from similar domains to improve the robustness of the learner. The main idea of existing transfer learning algorithms is to reduce the difference between domains through sample selection or domain adaptation. However, regardless of which transfer learning algorithm is used, some difference always remains, and hybrid training on source and target data reduces the learner's fitting capability on the target domain. Moreover, when the relatedness between domains is too low, negative transfer is more likely to occur. To tackle these problems, we propose a two-phase transfer learning architecture based on ensemble learning, which uses existing transfer learning algorithms to train the weak learners in the first stage, and uses their predictions on the target data to train the final learner in the second stage. Under this architecture, fitting capability and generalization capability can be guaranteed at the same time. Evaluation on public datasets demonstrates the effectiveness and robustness of the proposed method.
- Keywords: data diversification, domain adaptation, transfer learning, stacked generalization
- TL;DR: How to use stacked generalization to improve the performance of existing transfer learning algorithms when only limited labeled target-domain data is available.
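
The two-phase architecture described above can be sketched as a stacked-generalization pipeline. The snippet below is a minimal illustration, not the paper's implementation: it assumes synthetic source/target data and uses plain scikit-learn classifiers as stand-ins for the transfer-learning-trained weak learners; the final (meta) learner is fit only on the weak learners' predictions for target-domain samples.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Hypothetical synthetic data: the source distribution is shifted
# relative to the target, mimicking a domain gap.
X_src = rng.normal(0.0, 1.0, size=(200, 5))
y_src = (X_src[:, 0] + X_src[:, 1] > 0).astype(int)
X_tgt = rng.normal(0.5, 1.0, size=(40, 5))
y_tgt = (X_tgt[:, 0] + X_tgt[:, 1] > 0.5).astype(int)

# Phase 1: train weak learners on mixed source + target data
# (stand-ins for the existing transfer learning algorithms).
X_mix = np.vstack([X_src, X_tgt])
y_mix = np.concatenate([y_src, y_tgt])
weak_learners = [
    LogisticRegression().fit(X_mix, y_mix),
    DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_mix, y_mix),
]

# Phase 2: stack the weak learners' predicted probabilities on the
# *target* data only, and train the final learner on those features,
# so the meta-model is fit purely to the target domain.
meta_features = np.column_stack(
    [clf.predict_proba(X_tgt)[:, 1] for clf in weak_learners]
)
meta_learner = LogisticRegression().fit(meta_features, y_tgt)

# Inference: new target samples pass through both phases.
X_new = rng.normal(0.5, 1.0, size=(10, 5))
new_meta = np.column_stack(
    [clf.predict_proba(X_new)[:, 1] for clf in weak_learners]
)
preds = meta_learner.predict(new_meta)
```

Because the second-stage learner sees only target-domain predictions, its fit is not diluted by residual source/target differences, which is the intuition the abstract gives for guaranteeing fitting capability on the target domain.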