Nov 04, 2016 (modified: Mar 22, 2017) · ICLR 2017 conference submission
Abstract: Neural networks are function approximators that have achieved state-of-the-art accuracy in numerous machine learning tasks. Despite this success, their long training times make them difficult to apply to many tasks. In this paper, we explore the idea of learning the weight-evolution pattern of a simple network and using it to accelerate the training of new neural networks.
We use a neural network to learn the training pattern from MNIST classification and apply it to accelerate the training of networks for CIFAR-10 and ImageNet classification. Our method has a low memory footprint and is computationally efficient, and it can be combined with other optimizers for faster convergence. The results indicate a general trend in how weights evolve during neural network training.
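The core idea above can be sketched as follows: record a weight's values at a few early training steps, fit a predictor on many such histories from one task, then use the predictor to jump a new network's weights ahead instead of waiting for more optimizer steps. This is a minimal illustrative sketch, not the paper's method: the synthetic decay-style trajectories, the 4-step history format, and the linear least-squares predictor (standing in for the paper's learned "introspection" network) are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_trajectories(n, steps=5):
    # Synthetic weight histories: each weight moves from a random start
    # toward a random target, standing in for values logged while
    # training a simple (e.g. MNIST) network.
    target = rng.normal(size=(n, 1))
    start = rng.normal(size=(n, 1))
    t = np.linspace(0.0, 1.0, steps)
    return start + (target - start) * t  # shape (n, steps)

# Collect histories from the "source" task and fit a predictor that maps
# the first 4 recorded values of a weight to its value at a later step.
traj = make_trajectories(2000)
X, y = traj[:, :4], traj[:, 4]
A = np.hstack([X, np.ones((len(X), 1))])      # add a bias column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # least-squares fit

# "Accelerate" a new network: from 4 observed values per weight, predict
# where each weight is heading and jump it there directly.
new = make_trajectories(5)
jumped = np.hstack([new[:, :4], np.ones((5, 1))]) @ coef
mse = float(((jumped - new[:, 4]) ** 2).mean())
```

In this toy setup the extrapolation is exact because the synthetic trajectories are linear in the step index; real weight trajectories are noisier, which is why the paper trains a neural network rather than a linear model to capture the evolution pattern.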
TL;DR: Accelerating training by performing weight updates using knowledge obtained from training other neural networks.
Keywords: Computer vision, Deep learning, Optimization
Conflicts: iitk.ac.in, iitkgp.ac.in, adobe.com