Abstract: The default approach to constructing pattern recognition models for supervised learning problems is to fit the knowledge representation to the samples of a single problem describing a single, static concept. In recent years, however, transfer learning has been gaining popularity: a model initially trained on a different, standard dataset is adapted to a specific problem. Such solutions, however, are used mainly for signal data, in applications typical of deep learning models. This paper proposes a new training procedure for tabular data, in which a serial ensemble of neural network models is trained in parallel to recognize two different problems. An extensive experimental analysis compares approaches based on disjoint models, disjoint learning, and, as the main proposal of the work, inductive parallel learning of two problems. The analysis, carried out on synthetic and real-world problems, shows that the proposed approach is a promising starting point for further research.
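The abstract does not spell out the architecture or the training loop, so the following is only a minimal sketch of the general idea of training one neural network model on two tabular problems in parallel, assuming a shared trunk with one classification head per problem and a summed loss; the class and function names (TwoProblemNet, train_step) and all dimensions are hypothetical and not taken from the paper.

```python
import torch
import torch.nn as nn

class TwoProblemNet(nn.Module):
    """Hypothetical multi-task MLP for tabular inputs: shared trunk, two heads."""
    def __init__(self, n_features, n_classes_a, n_classes_b, hidden=64):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.head_a = nn.Linear(hidden, n_classes_a)  # head for problem A
        self.head_b = nn.Linear(hidden, n_classes_b)  # head for problem B

    def forward(self, x):
        h = self.trunk(x)
        return self.head_a(h), self.head_b(h)

def train_step(model, opt, xa, ya, xb, yb):
    """One joint update: cross-entropy losses from both problems are summed."""
    opt.zero_grad()
    logits_a, _ = model(xa)
    _, logits_b = model(xb)
    loss = (nn.functional.cross_entropy(logits_a, ya)
            + nn.functional.cross_entropy(logits_b, yb))
    loss.backward()
    opt.step()
    return loss.item()

# Usage with random placeholder tabular data (not from the paper's benchmarks).
model = TwoProblemNet(n_features=10, n_classes_a=2, n_classes_b=3)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
xa, ya = torch.randn(32, 10), torch.randint(0, 2, (32,))
xb, yb = torch.randn(32, 10), torch.randint(0, 3, (32,))
print(train_step(model, opt, xa, ya, xb, yb))
```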