Abstract: Multi-target learning is a prediction task in which each example is associated with multiple target variables (outputs) simultaneously. One of the challenges in this research field is the high dimensionality of the data combined with a large number of interdependent target variables. In such scenarios, it is crucial to extract lower-dimensional representations from the original input space that can be provided as input to multi-target predictors. In this paper, we propose using autoencoders as feature extractors on several publicly available multi-target classification datasets. Results were evaluated with state-of-the-art multi-target classification methods and evaluation measures from the literature. The experiments showed that the neural networks were able to maintain predictive performance even when the extracted features had a dimensionality equivalent to 10% of the original number of features, in some cases obtaining better results than with the original datasets.
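The following is a minimal sketch, not the authors' implementation, of the general idea described in the abstract: an undercomplete autoencoder whose bottleneck is sized to roughly 10% of the original feature count is trained to reconstruct the inputs, and the encoder output is then used as the lower-dimensional representation for a downstream multi-target classifier. All layer sizes, the optimizer, and the training settings are illustrative assumptions.

```python
import numpy as np
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    """Single-hidden-layer autoencoder; the encoder output is the extracted feature set."""
    def __init__(self, n_features: int, bottleneck: int):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, bottleneck), nn.ReLU())
        self.decoder = nn.Linear(bottleneck, n_features)

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Toy data standing in for the input features of a multi-target dataset
# (the targets are only needed later, by the downstream multi-target classifier).
X = torch.tensor(np.random.rand(500, 100), dtype=torch.float32)

n_features = X.shape[1]
bottleneck = max(1, n_features // 10)      # roughly 10% of the original dimension
model = Autoencoder(n_features, bottleneck)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Unsupervised training: learn to reconstruct the inputs.
for epoch in range(50):
    optimizer.zero_grad()
    loss = loss_fn(model(X), X)
    loss.backward()
    optimizer.step()

# The encoder output is the lower-dimensional representation that would be fed
# to any multi-target classification method (e.g., binary relevance or classifier chains).
with torch.no_grad():
    Z = model.encoder(X)                   # shape: (500, bottleneck)
print(Z.shape)
```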