Collapse of deep and narrow neural nets

27 Sept 2018 (modified: 22 Oct 2023) · ICLR 2019 Conference Blind Submission
Abstract: Recent theoretical work has demonstrated that deep neural networks outperform shallow networks, but that they are harder to train, e.g., they suffer from the vanishing gradient problem. This problem can typically be mitigated by the rectified linear unit (ReLU) activation. However, here we show that even with ReLU activation, deep and narrow neural networks (NNs) converge with high probability to erroneous states corresponding to the mean or the median of the target function, depending on the loss function. Deep and narrow NNs are encountered, for example, in solving partial differential equations with high-order derivatives. We demonstrate this collapse both numerically and theoretically, and provide estimates of the probability of collapse. We also construct a diagram of a safe region for designing NNs that avoid collapse to erroneous states. Finally, we examine different initialization and normalization schemes that may avoid the collapse problem. Asymmetric initializations may reduce the probability of collapse but do not eliminate it entirely.
TL;DR: Deep and narrow neural networks converge, with high probability, to erroneous mean or median states of the target function, depending on the loss.
Keywords: neural networks, deep and narrow, ReLU, collapse
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:1808.04947/code)
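
As a quick illustration of the phenomenon described in the abstract, below is a minimal sketch (not the authors' code; the depth, width, target function f(x) = |x|, optimizer, and step count are all illustrative assumptions): a deep, narrow ReLU network trained with MSE loss frequently converges to a nearly constant output close to the mean of the target.

```python
# Hypothetical sketch of the collapse phenomenon: a deep, narrow ReLU MLP
# trained with MSE loss often collapses to a constant output near the mean
# of the target function. All hyperparameters here are illustrative choices.
import torch
import torch.nn as nn

torch.manual_seed(0)

depth, width = 10, 2                      # deep and narrow
layers = [nn.Linear(1, width), nn.ReLU()]
for _ in range(depth - 1):
    layers += [nn.Linear(width, width), nn.ReLU()]
layers += [nn.Linear(width, 1)]
net = nn.Sequential(*layers)

x = torch.linspace(-1.0, 1.0, 256).unsqueeze(1)
y = torch.abs(x)                          # target f(x) = |x|; mean ≈ 0.5

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(5000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(x), y)
    loss.backward()
    opt.step()

# A collapsed run produces a nearly constant prediction near y.mean() ≈ 0.5.
pred = net(x).detach()
print(f"prediction std  : {pred.std().item():.4f}  (≈ 0 if collapsed)")
print(f"prediction mean : {pred.mean().item():.4f} vs target mean {y.mean().item():.4f}")
```

Per the paper's claim, swapping the loss for L1 (e.g., `nn.functional.l1_loss`) should make a collapsed run approach the median of the target instead of its mean.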