Channel Normalization in Convolutional Neural Network avoids Vanishing Gradients

Published: 04 Jun 2019, Last Modified: 14 Oct 2024
Venue: ICML Deep Phenomena 2019
Keywords: normalization, optimization, vanishing gradients, convolutional neural network
Abstract: Normalization layers are widely used in deep neural networks to stabilize training. In this paper, we consider the training of convolutional neural networks with gradient descent on a single training example. This optimization problem arises in recent approaches for solving inverse problems such as the deep image prior or the deep decoder. We show that for this setup, channel normalization, which centers and normalizes each channel individually, avoids vanishing gradients, whereas without normalization the gradients vanish, which prevents efficient optimization. This effect prevails in deep single-channel linear convolutional networks, and we show that without channel normalization, gradient descent takes at least exponentially many steps to come close to an optimum. In contrast, with channel normalization the gradients remain bounded away from zero, which enables efficient optimization.
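To illustrate the channel normalization operation described in the abstract (centering and normalizing each channel individually), here is a minimal sketch; it is not code from the paper, and the tensor layout, function name, and epsilon constant are assumptions for illustration.

```python
import numpy as np

def channel_normalize(x, eps=1e-6):
    """Center and normalize each channel of a feature map individually.

    Sketch of channel normalization as described in the abstract,
    assuming a feature map of shape (channels, height, width).
    """
    # Per-channel mean and standard deviation over the spatial dimensions.
    mean = x.mean(axis=(1, 2), keepdims=True)
    std = x.std(axis=(1, 2), keepdims=True)
    # Subtract the mean and divide by the standard deviation channel-wise.
    return (x - mean) / (std + eps)

# Example: normalize a random 16-channel feature map.
features = np.random.randn(16, 32, 32)
normalized = channel_normalize(features)
print(normalized.mean(axis=(1, 2)).round(6))  # approximately 0 per channel
print(normalized.std(axis=(1, 2)).round(6))   # approximately 1 per channel
```

After this operation, every channel has zero mean and unit variance over its spatial dimensions, which is the per-channel centering and normalization the abstract refers to.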
Community Implementations: 1 code implementation on CatalyzeX: https://www.catalyzex.com/paper/channel-normalization-in-convolutional-neural/code