Trajectory growth through random deep ReLU networks

25 Sept 2019 (modified: 05 May 2023) · ICLR 2020 Conference Blind Submission
Keywords: Deep networks, expressivity, trajectory growth, sparse neural networks
TL;DR: The expected trajectory growth of a random, sparsely connected deep neural network is exponential in depth for many weight distributions, including the default initialisations used in TensorFlow and PyTorch.
Abstract: This paper considers the growth in the length of one-dimensional trajectories as they are passed through deep ReLU neural networks, which, among other things, is one measure of the expressivity of deep networks. We generalise existing results, providing an alternative, simpler method for lower bounding the expected trajectory growth through random networks for a more general class of weight distributions, including sparsely connected networks. We illustrate this approach by deriving bounds for sparse-Gaussian, sparse-uniform, and sparse-discrete-valued random nets. We prove that trajectory growth can remain exponential in depth under these new distributions, including their sparse variants, with the sparsity parameter appearing in the base of the exponent.
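
The quantity studied in the abstract can be estimated empirically. The following is a minimal sketch (not the authors' code) of measuring the length of a one-dimensional input trajectory after each layer of a random, sparsely connected deep ReLU network; the width, depth, sparsity level, circle trajectory, and Gaussian weight scaling are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def trajectory_length(points):
    """Sum of Euclidean distances between consecutive points on the trajectory."""
    return np.sum(np.linalg.norm(np.diff(points, axis=0), axis=1))

def random_sparse_relu_net(depth, width, keep_prob, rng):
    """Sparse Gaussian weight matrices (biases omitted for brevity)."""
    layers = []
    for _ in range(depth):
        w = rng.normal(0.0, np.sqrt(2.0 / width), size=(width, width))  # He-style scaling (assumed)
        mask = rng.random((width, width)) < keep_prob                   # keep each weight with prob. keep_prob
        layers.append(w * mask)
    return layers

def lengths_per_layer(points, layers):
    """Trajectory length of the input and after every ReLU layer."""
    lengths = [trajectory_length(points)]
    h = points
    for w in layers:
        h = np.maximum(h @ w.T, 0.0)  # ReLU(W h)
        lengths.append(trajectory_length(h))
    return lengths

rng = np.random.default_rng(0)
width, depth, keep_prob = 100, 20, 0.5

# One-dimensional input trajectory: a circle embedded in the input space.
t = np.linspace(0.0, 2.0 * np.pi, 1000)
circle = np.zeros((t.size, width))
circle[:, 0], circle[:, 1] = np.cos(t), np.sin(t)

for d, ell in enumerate(lengths_per_layer(circle, random_sparse_relu_net(depth, width, keep_prob, rng))):
    print(f"layer {d:2d}: trajectory length = {ell:.3e}")
```

Averaging the per-layer lengths over many random draws of the weights gives a Monte Carlo estimate of the expected growth as a function of depth and the sparsity parameter, the quantity the paper lower-bounds analytically.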
