More Effective Distributed Deep Learning Using Staleness Based Parameter Updating

Yan Ye, Mengqiang Chen, Zijie Yan, Weigang Wu, Nong Xiao

Published: 2018, Last Modified: 15 Mar 2026 · ICA3PP (2) 2018 · CC BY-SA 4.0
Abstract: Deep learning technology has been widely applied for various purposes, especially big data analysis. However, the computation required for deep learning is growing ever larger and more complex. To accelerate the training of large-scale deep networks, various distributed parallel training protocols have been proposed. In this paper, we design a novel asynchronous training protocol, Weighted Asynchronous Parallel (WASP), which updates neural network parameters in a more effective way. The core of WASP is "gradient staleness", a metric based on parameter version numbers that is used to weight gradients and reduce the influence of stale updates. Moreover, through periodic forced synchronization of parameters, WASP combines the advantages of synchronous and asynchronous training models and can speed up training with a rapid convergence rate. We conduct experiments with two classical convolutional neural networks, LeNet-5 and ResNet-101, on the Tianhe-2 supercomputer, and the results show that WASP achieves much higher acceleration than existing asynchronous parallel training protocols.
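The abstract describes two mechanisms: weighting each pushed gradient by its staleness (the gap between the server's current parameter version and the version the worker pulled), and forcing a periodic synchronization barrier. A minimal single-process sketch of this idea is below; the class name, the `1/(1 + staleness)` weighting form, and the scalar parameter are illustrative assumptions, not the paper's actual formulation.

```python
class ParameterServer:
    """Sketch of a staleness-weighted (WASP-style) parameter update.

    The 1/(1 + staleness) damping is an assumed example form; the paper
    may define a different weighting function.
    """

    def __init__(self, param, lr=0.1, sync_period=10):
        self.param = param        # a single scalar parameter, for illustration
        self.version = 0          # incremented after every applied update
        self.lr = lr
        self.sync_period = sync_period

    def pull(self):
        # A worker fetches the current parameter together with its version.
        return self.param, self.version

    def push(self, grad, worker_version):
        # Gradient staleness: number of updates applied since the worker pulled.
        staleness = self.version - worker_version
        weight = 1.0 / (1.0 + staleness)  # fresh gradients count fully, stale ones less
        self.param -= self.lr * weight * grad
        self.version += 1
        # Every sync_period updates, the server would force all workers to
        # resynchronize (the multi-worker barrier itself is omitted here).
        return self.version % self.sync_period == 0
```

For example, a gradient pushed at the server's current version is applied at full strength, while one computed from parameters that are one update old is applied at half strength, which is how staleness-aware schemes limit the damage from delayed workers while still using their work.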