Abstract: The normal distribution plays a central role in information theory: it is at once the best-case signal and worst-case noise distribution, has the greatest representational capacity of any distribution, and offers an equivalence between uncorrelatedness and independence for jointly normal random variables. Accounting for the mean and variance of activations throughout the layers of deep neural networks has been instrumental in facilitating their effective training, but a prescription for precisely what distribution these activations should follow, and how this might be achieved, has seldom been offered. Motivated by the information-theoretic properties of the normal distribution, we address this question and concurrently present normality normalization: a novel normalization layer which encourages normality in the feature representations of neural networks via the power transform, and which employs additive Gaussian noise during training. Our experiments comprehensively demonstrate the effectiveness of normality normalization: its generalization performance on an array of widely used model and dataset combinations, its strong performance across common factors of variation such as model width, depth, and training minibatch size, its suitability for use wherever existing normalization layers are conventionally employed, and its ability to improve model robustness to random perturbations.
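To make the mechanism described above concrete, the following is a minimal conceptual sketch, not the authors' implementation (see the linked repository for the actual layer). It assumes the power transform is Yeo-Johnson with a per-feature parameter `lmbda` (here a learnable parameter initialized to the identity transform, lmbda = 1), per-feature standardization over the minibatch, and a fixed noise scale `noise_std`; the class name `GaussianizingNorm` is hypothetical.

```python
import torch
import torch.nn as nn


def yeo_johnson(x: torch.Tensor, lmbda: torch.Tensor) -> torch.Tensor:
    """Element-wise Yeo-Johnson power transform.

    Sketch assumption: lmbda stays bounded away from 0 and 2, so the
    special-case branches of the transform are omitted for brevity.
    """
    pos = x >= 0
    out_pos = ((x.clamp(min=0) + 1).pow(lmbda) - 1) / lmbda
    out_neg = -(((-x.clamp(max=0) + 1).pow(2 - lmbda) - 1) / (2 - lmbda))
    return torch.where(pos, out_pos, out_neg)


class GaussianizingNorm(nn.Module):
    """Standardize, gaussianize via a power transform, add noise during training."""

    def __init__(self, num_features: int, lmbda: float = 1.0, noise_std: float = 0.1):
        super().__init__()
        # Per-feature power-transform parameter (lmbda = 1 is the identity).
        self.lmbda = nn.Parameter(torch.full((num_features,), lmbda))
        self.noise_std = noise_std
        # Learnable affine parameters, as in conventional normalization layers.
        self.weight = nn.Parameter(torch.ones(num_features))
        self.bias = nn.Parameter(torch.zeros(num_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_features). Standardize per feature over the minibatch.
        x = (x - x.mean(dim=0)) / (x.std(dim=0) + 1e-5)
        # Encourage normality with the power transform, then re-standardize.
        x = yeo_johnson(x, self.lmbda)
        x = (x - x.mean(dim=0)) / (x.std(dim=0) + 1e-5)
        # Additive Gaussian noise during training only.
        if self.training:
            x = x + self.noise_std * torch.randn_like(x)
        return x * self.weight + self.bias


if __name__ == "__main__":
    layer = GaussianizingNorm(num_features=8)
    out = layer(torch.randn(32, 8))
    print(out.shape)  # torch.Size([32, 8])
```

In this sketch the layer slots in wherever a conventional normalization layer (e.g. batch or layer normalization) would be used; how lmbda is actually estimated, and how the noise scale is set, are details of the paper and its released code rather than of this illustration.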
Lay Summary: Successfully training deep neural networks depends greatly on how data is represented as it is processed through the layers of a network. Until now, controlling the average and spread of these representations has been the main approach used to help neural networks train effectively. In this work, we went further: we motivated a specific distribution that neural network representations should follow, and realized this choice of distribution through a new layer we developed. Our experiments and analysis comprehensively demonstrated the effectiveness of this new layer.
Link To Code: https://github.com/DanielEftekhari/normality-normalization
Primary Area: Deep Learning->Other Representation Learning
Keywords: mutual information game, gaussianization, power transform, noise robustness, normalization layer, deep learning, information theory
Submission Number: 7705