TL;DR: We prove that unsupervised pre-training can dramatically reduce sample complexity in single-index models
Abstract: Unsupervised pre-training and transfer learning are commonly used techniques to initialize training algorithms for neural networks, particularly in settings with limited labeled data. In this paper, we study the effects of unsupervised pre-training and transfer learning on the sample complexity of high-dimensional supervised learning. Specifically, we consider the problem of training a single-layer neural network via online stochastic gradient descent. We establish that pre-training and transfer learning (under concept shift) reduce sample complexity by polynomial factors (in the dimension) under very general assumptions. We also uncover some surprising settings where pre-training grants exponential improvement over random initialization in terms of sample complexity.
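To make the setting concrete, here is a minimal, illustrative sketch (in Python, not taken from the paper) of online SGD for a single-index model, comparing a random initialization with a warm start that is already weakly correlated with the ground-truth direction, standing in for a pre-trained initialization. The link function, step size, sample budget, and warm-start overlap are assumptions for illustration only, not the paper's algorithm or constants.

```python
import numpy as np

# Sketch: online SGD on the unit sphere for a single-index model
# y = sigma(<w*, x>) + noise, with either a random or a "pre-trained"
# (warm-started) initialization. All constants are illustrative.

rng = np.random.default_rng(0)
d = 500                                   # ambient dimension
w_star = rng.normal(size=d)
w_star /= np.linalg.norm(w_star)

def sigma(z):
    # Illustrative link function (cubic Hermite-type); the paper's class is more general.
    return z ** 3 - 3 * z

def sample(n):
    X = rng.normal(size=(n, d))
    y = sigma(X @ w_star) + 0.1 * rng.normal(size=n)
    return X, y

def online_sgd(w0, n_steps, lr=1e-3):
    """Online SGD: one fresh sample per step, renormalizing to the sphere."""
    w = w0 / np.linalg.norm(w0)
    for _ in range(n_steps):
        x, y = sample(1)
        z = x @ w
        # Gradient of the squared loss 0.5*(sigma(<w,x>) - y)^2 w.r.t. w.
        grad = ((sigma(z) - y) * (3 * z ** 2 - 3)) * x.ravel()
        w = w - lr * grad
        w /= np.linalg.norm(w)
    return w

# Random initialization: overlap with w* is of order 1/sqrt(d).
w_rand = rng.normal(size=d)

# Hypothetical pre-trained initialization: small constant overlap with w*.
w_pre = 0.2 * w_star + rng.normal(size=d) / np.sqrt(d)

for name, w0 in [("random init", w_rand), ("pre-trained init", w_pre)]:
    w_hat = online_sgd(w0, n_steps=20_000)
    print(name, "overlap with w*:", float(w_hat @ w_star))
```

The printed overlaps track how well each run recovers the hidden direction from the same online sample budget; the paper's results quantify how much smaller that budget can be when the initialization already carries information about the target.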
Lay Summary: Pre-training and transfer learning are important concepts in machine learning that are used widely in practice. Despite their use, these concepts are not well understood theoretically. In this paper we provide rigorous justification for the use of pre-training and transfer learning. In particular, we consider a common statistical model of interest in high-dimensional probability (the single-index model) whose parameters can be estimated with stochastic gradient descent. We consider pre-training and transfer-learning scenarios that one may use to provide initial parameter estimates. We prove that, for a class of single-index models, parameter estimation can be achieved with significantly lower sample complexity than with random initialization.
Primary Area: Theory->Learning Theory
Keywords: Single-Index Model, Pre-training, Transfer Learning, Sample Complexity, Stochastic Gradient Descent
Submission Number: 5376