On the impact of activation and normalization in obtaining isometric embeddings at initialization

Published: 21 Sept 2023, Last Modified: 12 Jan 2024
NeurIPS 2023 poster
Keywords: dynamical isometry, Lyapunov analysis, random neural networks
Abstract: In this paper, we explore the structure of the penultimate Gram matrix in deep neural networks, which contains the pairwise inner products of outputs corresponding to a batch of inputs. In several architectures, it has been observed that this Gram matrix becomes degenerate with depth at initialization, which dramatically slows training. Normalization layers, such as batch or layer normalization, play a pivotal role in preventing this rank collapse. Despite promising advances, existing theoretical results do not extend to layer normalization, which is widely used in transformers, and cannot quantitatively characterize the role of non-linear activations. To bridge this gap, we prove that layer normalization, in conjunction with activation layers, biases the Gram matrix of a multilayer perceptron towards the identity matrix at an exponential rate with depth at initialization. We quantify this rate using the Hermite expansion of the activation function.
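The following is a minimal numerical sketch, not taken from the paper or its supplementary material, that illustrates the abstract's claim under illustrative choices (a tanh activation, a per-sample layer normalization, correlated Gaussian inputs, and 1/sqrt(width) weight scaling). It tracks how far the batch Gram matrix of the hidden representations is from the identity as depth grows in a randomly initialized MLP with layer normalization.

```python
# Illustrative sketch (assumptions: tanh activation, i.i.d. Gaussian weights
# scaled by 1/sqrt(width), per-sample layer normalization). It prints the
# Frobenius distance between the batch Gram matrix and the identity at
# increasing depth; the paper's result predicts an exponential decay.
import numpy as np

rng = np.random.default_rng(0)
batch, width, depth = 8, 1024, 100

def layer_norm(H):
    # Normalize each row (one sample in the batch) to zero mean, unit variance.
    H = H - H.mean(axis=1, keepdims=True)
    return H / H.std(axis=1, keepdims=True)

# Strongly correlated inputs, so the initial Gram matrix is far from identity.
base = rng.standard_normal(width)
X = base + 0.5 * rng.standard_normal((batch, width))
H = layer_norm(X)

for layer in range(1, depth + 1):
    W = rng.standard_normal((width, width)) / np.sqrt(width)
    H = layer_norm(np.tanh(H @ W))
    # Gram matrix of pairwise inner products; diagonal is ~1 after normalization.
    G = H @ H.T / width
    if layer % 20 == 0:
        print(f"depth {layer:3d}  ||G - I||_F = {np.linalg.norm(G - np.eye(batch)):.4f}")
```

Under these assumptions, the off-diagonal entries of the Gram matrix shrink with depth, so the printed distance decays toward a small finite-width noise floor, consistent with the stated exponential rate governed by the activation's Hermite coefficients.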
Supplementary Material: zip
Submission Number: 8667