Abstract: We analyze the joint probability distribution on the lengths of the
vectors of hidden variables in different layers of a fully connected
deep network, when the weights and biases are chosen randomly according to
Gaussian distributions, and the input is binary-valued. We show
that, if the activation function satisfies a minimal set of
assumptions, which holds for every activation function that we know
to be used in practice, then, as the width of the network grows,
the "length process" converges in probability to a length map
that is determined as a simple function of the variances of the
random weights and biases, and the activation function.
We also show that this convergence may fail for activation functions
that violate our assumptions.
Keywords: theory, length map, initialization
TL;DR: We prove that, for activation functions satisfying some conditions, as a deep network gets wide, the lengths of the vectors of hidden variables converge to a length map.
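A minimal numerical sketch of the claimed convergence, assuming the standard 1/sqrt(width)-scaled Gaussian initialization, a tanh activation, and a pre-activation form of the length map q_{l+1} = s_w^2 E[phi(sqrt(q_l) z)^2] + s_b^2; the particular width, depth, and variance values below are illustrative choices, not taken from the paper:

```python
import numpy as np

# Hypothetical illustration: compare the empirical layer-wise squared lengths
# ("length process") of a random fully connected net on a binary input against
# the deterministic length map. All constants here are assumptions for this sketch.
rng = np.random.default_rng(0)
n, depth = 2000, 10            # width and depth (illustrative)
s_w, s_b = 1.5, 0.1            # std. devs of weights (before 1/sqrt(n) scaling) and biases
phi = np.tanh                  # a bounded, smooth activation used in practice

x = rng.choice([-1.0, 1.0], size=n)   # binary-valued input, so mean(x**2) == 1

# Empirical length process: normalized squared lengths of pre-activations per layer.
h, q_emp = x, []
for _ in range(depth):
    W = rng.normal(0.0, s_w / np.sqrt(n), size=(n, n))
    b = rng.normal(0.0, s_b, size=n)
    a = W @ h + b
    q_emp.append(np.mean(a ** 2))
    h = phi(a)

# Deterministic length map, iterated from the same input length; the Gaussian
# expectation is estimated by Monte Carlo.
z = rng.normal(size=200_000)
q = s_w ** 2 * np.mean(x ** 2) + s_b ** 2
q_map = [q]
for _ in range(depth - 1):
    q = s_w ** 2 * np.mean(phi(np.sqrt(q) * z) ** 2) + s_b ** 2
    q_map.append(q)

print(max(abs(e - m) for e, m in zip(q_emp, q_map)))  # small at large width
```

At width 2000 the two sequences track each other closely, consistent with the convergence-in-probability statement; shrinking n makes the gap visibly larger.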