GPU Asynchronous Stochastic Gradient Descent to Speed Up Neural Network Training
Hailin Jin, Thomas Huang, Zhe Lin, Jianchao Yang, Thomas Paine
Dec 25, 2013 (modified: Dec 25, 2013) · ICLR 2014 workshop submission · Readers: everyone
Decision: submitted, no decision
Abstract: The ability to train large-scale neural networks has resulted in state-of-the-art performance in many areas of computer vision. These results have largely come from computational breakthroughs of two forms: model parallelism, e.g. GPU-accelerated training, which has seen quick adoption in computer vision circles, and data parallelism, e.g. A-SGD, which has been used at large scale mostly in industry. We report early experiments with a system that makes use of both model parallelism and data parallelism, which we call GPU A-SGD. We show that using GPU A-SGD it is possible to speed up the training of large convolutional neural networks useful for computer vision. We believe GPU A-SGD will make it possible to train larger networks on larger training sets in a reasonable amount of time.
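For readers unfamiliar with the data-parallel half of this recipe, the sketch below illustrates the asynchronous update pattern behind A-SGD: several workers each draw their own mini-batches, compute gradients, and apply them to a shared parameter vector without synchronizing. This is only a toy CPU illustration of the pattern, not the paper's system; the paper's workers each train a convolutional network on a GPU and exchange parameters with a server. Every name and constant here (worker, N_WORKERS, the least-squares objective) is a hypothetical choice made for the example.

```python
# Minimal sketch of asynchronous SGD (A-SGD) on a toy least-squares
# problem. Workers update shared parameters lock-free, Hogwild-style.
# Illustrative only; the paper's GPU A-SGD system is not reproduced here.
import numpy as np
from multiprocessing import Process, Array

N_FEATURES = 10        # hypothetical problem size
N_WORKERS = 4          # hypothetical number of asynchronous workers
STEPS_PER_WORKER = 5000
LR = 0.01

def worker(shared_w, seed):
    """Draw mini-batches and apply gradient steps to the shared
    parameters without any locking (asynchronous updates)."""
    rng = np.random.default_rng(seed)
    true_w = np.arange(N_FEATURES, dtype=np.float64)  # target weights
    w = np.frombuffer(shared_w.get_obj())             # view onto shared memory
    for _ in range(STEPS_PER_WORKER):
        x = rng.standard_normal((32, N_FEATURES))     # mini-batch of inputs
        y = x @ true_w                                # synthetic targets
        grad = x.T @ (x @ w - y) / len(x)             # least-squares gradient
        w -= LR * grad                                # asynchronous SGD step

if __name__ == "__main__":
    shared_w = Array("d", N_FEATURES)  # parameters shared by all workers
    procs = [Process(target=worker, args=(shared_w, s)) for s in range(N_WORKERS)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print("learned weights:", np.round(np.frombuffer(shared_w.get_obj()), 2))
```

Lock-free updates of this kind trade gradient staleness for throughput: workers may overwrite each other and compute gradients against slightly stale parameters, yet in practice training still converges, which is the observation that Downpour SGD and Hogwild! rest on and that this paper extends to GPU workers.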