Revisiting Distributed Synchronous SGD
Jianmin Chen*, Xinghao Pan*, Rajat Monga, Samy Bengio, Rafal Jozefowicz
Nov 01, 2016 (modified: Dec 15, 2016) · ICLR 2017 conference submission
Abstract: Distributed training of deep learning models on large-scale training data is typically conducted with asynchronous stochastic optimization to maximize the rate of updates, at the cost of additional noise introduced by asynchrony. In contrast, the synchronous approach is often thought to be impractical due to idle time wasted waiting for straggling workers. We revisit these conventional beliefs in this paper, and examine the weaknesses of both approaches. We demonstrate that a third approach, synchronous optimization with backup workers, can avoid asynchronous noise while mitigating the effect of the worst stragglers. Our approach is empirically validated and shown to converge faster and to better test accuracies.
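The backup-worker idea described in the abstract can be sketched as follows: with N workers and b backups, each step waits only for the fastest K = N - b gradients and discards the stragglers' results before applying a synchronous update. The snippet below is a minimal toy simulation of that aggregation rule, not the authors' TensorFlow implementation; the function name `aggregate_backup_sync` and the exponential arrival-time model are illustrative assumptions.

```python
import numpy as np

def aggregate_backup_sync(worker_results, num_collect):
    """Synchronous aggregation with backup workers: average only the
    first `num_collect` gradients to arrive; drop the stragglers.

    worker_results: list of (arrival_time, gradient) pairs.
    (Hypothetical helper for illustration, not the paper's API.)
    """
    by_arrival = sorted(worker_results, key=lambda pair: pair[0])
    fastest = [grad for _, grad in by_arrival[:num_collect]]
    return np.mean(fastest, axis=0)

# Toy step: N = 12 workers, 2 backups, so collect K = 10 gradients.
rng = np.random.default_rng(0)
true_grad = np.array([1.0, -2.0, 0.5])
results = [
    (rng.exponential(1.0),                      # simulated arrival time
     true_grad + 0.01 * rng.standard_normal(3)) # noisy per-worker gradient
    for _ in range(12)
]
update = aggregate_backup_sync(results, num_collect=10)
print(update.shape)  # (3,)
```

Because only the fastest K results are averaged, one run's wall-clock step time is bounded by the K-th fastest worker rather than the slowest, which is the mechanism the paper uses to mitigate stragglers.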
TL;DR: We propose distributed synchronous stochastic optimization with backup workers, and show that it converges faster and to better test accuracies.
Keywords: Optimization, Deep learning, Applications