Gradients explode - Deep Networks are shallow - ResNet explained

12 Feb 2018 (modified: 21 Mar 2018) · ICLR 2018 Workshop Submission · Readers: Everyone
Keywords: deep learning, MLP, ResNet, residual network, exploding gradient problem, vanishing gradient problem, effective depth, batch normalization, covariate shift
TL;DR: We show that, in contrast to popular wisdom, the exploding gradient problem has not been solved and that it limits the depth to which MLPs can be effectively trained. We show why gradients explode and how ResNets handle them.
Abstract: Whereas it is believed that techniques such as Adam, batch normalization and, more recently, SeLU nonlinearities "solve" the exploding gradient problem, we show that this is not the case and that in a range of popular MLP architectures, exploding gradients exist and that they limit the depth to which networks can be effectively trained, both in theory and in practice. We explain why exploding gradients occur and highlight the *collapsing domain problem*, which can arise in architectures that avoid exploding gradients. ResNets have significantly lower gradients and thus can circumvent the exploding gradient problem, enabling the effective training of much deeper networks, which we show is a consequence of a surprising mathematical property. By noticing that *any neural network is a residual network*, we devise the *residual trick*, which reveals that introducing skip connections simplifies the network mathematically, and that this simplicity may be the major cause for their success.
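The "residual trick" mentioned in the abstract rests on a simple identity: any layer h = f(x) can be rewritten as h = x + r(x) with r(x) = f(x) - x, so every network can be viewed as a residual network around the identity. The sketch below (not from the authors' code; it assumes a plain fully connected tanh layer for illustration) checks that the two formulations produce identical activations.

```python
# Minimal sketch of the identity behind the "residual trick":
# any layer f(x) can be rewritten as x + r(x) with r(x) = f(x) - x.
import numpy as np

rng = np.random.default_rng(0)

def layer(x, W):
    """An ordinary fully connected tanh layer: f(x) = tanh(W x)."""
    return np.tanh(W @ x)

def residual_view(x, W):
    """The same layer written as identity plus a residual function."""
    r = layer(x, W) - x   # residual r(x) = f(x) - x
    return x + r          # identical output, expressed in residual form

x = rng.standard_normal(4)
W = rng.standard_normal((4, 4))

# Both formulations yield exactly the same activations.
assert np.allclose(layer(x, W), residual_view(x, W))
```

In an explicit ResNet, the skip connection fixes the identity part architecturally and lets the trainable block learn only r(x), which is the simplification the abstract credits for ResNets' success.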
