On skip connections and normalisation layers in deep optimisation

Published: 21 Sept 2023, Last Modified: 03 Jan 2024, NeurIPS 2023 poster
Keywords: optimisation, optimization, skip, connection, normalisation, normalization, deep, learning, polyak, lojasiewicz, lipschitz
Abstract: We introduce a general theoretical framework, designed for the study of gradient optimisation of deep neural networks, that encompasses ubiquitous architecture choices including batch normalisation, weight normalisation and skip connections. Our framework determines the curvature and regularity properties of multilayer loss landscapes in terms of their constituent layers, thereby elucidating the roles played by normalisation layers and skip connections in globalising these properties. We then demonstrate the utility of this framework in two respects. First, we give the only proof of which we are aware that a class of deep neural networks can be trained using gradient descent to global optima even when such optima only exist at infinity, as is the case for the cross-entropy cost. Second, we identify a novel causal mechanism by which skip connections accelerate training, which we verify predictively with ResNets on MNIST, CIFAR10, CIFAR100 and ImageNet.
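To make the architectural ingredients the abstract refers to concrete, here is a minimal, illustrative PyTorch sketch of a residual block combining a skip connection with batch normalisation. This is only a generic example of the ubiquitous architecture choices mentioned above (the class name, channel count, and layer arrangement are assumptions, not the paper's specific models or framework).

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """A basic residual block: y = ReLU(x + F(x)), with batch normalisation inside F."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        residual = x                          # skip connection carries the identity term
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + residual)      # identity added back before the final nonlinearity

# Example usage: a block preserving a 32x32 feature map with 64 channels
block = ResidualBlock(64)
x = torch.randn(8, 64, 32, 32)
y = block(x)
print(y.shape)  # torch.Size([8, 64, 32, 32])
```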
Supplementary Material: zip
Submission Number: 3333