Learning with Little Data: Evaluation of Deep Learning Algorithms

27 Sept 2018 (modified: 05 May 2023) · ICLR 2019 Conference Withdrawn Submission
Abstract: Deep learning has become a widely used tool for many computational and classification problems. Nevertheless, obtaining and labeling the data needed for strong results is often expensive or even impossible. In this paper, three different algorithmic approaches to dealing with limited access to data are evaluated and compared to each other. We show the drawbacks and benefits of each method. One successful approach, especially in one- or few-shot learning tasks, is the use of external data during the classification task. Another successful approach, which achieves state-of-the-art results on semi-supervised learning (SSL) benchmarks, is consistency regularization; in particular, virtual adversarial training (VAT) has shown strong results and is investigated in this paper. The aim of consistency regularization is to force the network not to change its output when the input or the network itself is perturbed. Generative adversarial networks (GANs) have also shown strong empirical results. In many approaches, the GAN architecture is used to create additional data and thereby increase the generalization capability of the classification network. Furthermore, we consider the use of unlabeled data for further performance improvement, both for GANs and for VAT.
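To make the consistency-regularization idea concrete, the following is a minimal sketch of a VAT-style loss in PyTorch. It is an illustration under stated assumptions, not the paper's implementation: `model` is assumed to be any classifier returning logits, and the hyperparameters `xi` (probe step size) and `eps` (perturbation radius) are illustrative defaults.

```python
import torch
import torch.nn.functional as F

def vat_loss(model, x, xi=1e-6, eps=2.0):
    """Sketch of a virtual adversarial loss: KL divergence between the
    prediction on x and on x shifted along its most sensitive direction.
    `model`, `xi`, and `eps` are illustrative assumptions, not the paper's code."""
    # Reference prediction on the clean input, treated as a fixed target.
    with torch.no_grad():
        p = F.softmax(model(x), dim=1)

    # Start from a random unit-norm direction d (per sample).
    d = torch.randn_like(x)
    d = d / (d.flatten(1).norm(dim=1).view(-1, *([1] * (x.dim() - 1))) + 1e-8)
    d.requires_grad_(True)

    # One power-iteration step: differentiate the divergence w.r.t. d to
    # find the direction the output is most sensitive to.
    log_p_hat = F.log_softmax(model(x + xi * d), dim=1)
    dist = F.kl_div(log_p_hat, p, reduction="batchmean")
    grad = torch.autograd.grad(dist, d)[0]
    r_adv = eps * grad / (grad.flatten(1).norm(dim=1).view(-1, *([1] * (x.dim() - 1))) + 1e-8)

    # Consistency term: the output should not change under the perturbation.
    log_p_hat = F.log_softmax(model(x + r_adv), dim=1)
    return F.kl_div(log_p_hat, p, reduction="batchmean")
```

In a semi-supervised setting, this term would typically be added, weighted by a coefficient, to the cross-entropy loss on the labeled batch; since it requires no targets, it can also be evaluated on unlabeled inputs, which is how VAT exploits unlabeled data.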
Keywords: semi-supervised learning, generative models, few-shot learning
TL;DR: Comparison of Siamese neural networks, GANs, and VAT for few-shot learning.