Keywords: Batch Normalization, Deep Learning
TL;DR: Extend batch normalization to also reduce the skewness of layer features.
Abstract: Batch Normalization (BN) is a well-known technique used in training deep neural networks.
The main idea behind batch normalization is to normalize the features of the layers (i.e., transform them to have zero mean and unit variance).
Such a procedure makes the optimization landscape of the loss function smoother and improves the learning of the network in terms of both speed and performance.
In this paper,
we demonstrate that the performance of a network can be improved
if the feature distributions of the outputs within the same layer are similar.
As normalizing based on mean and variance does not necessarily make the features follow the same distribution, we propose a new normalization scheme: Batch Normalization with Skewness Reduction (BNSR).
Compared with other normalization approaches,
BNSR transforms not only the mean and variance,
but also the skewness of the data.
By tackling this property of the distribution, we are able to make the output distributions of the layers more similar. The nonlinearity of BNSR may further improve the expressiveness of the underlying network.
We compare BNSR with other normalization schemes on the CIFAR-100 and ImageNet datasets. Experimental results show that networks equipped with the proposed approach can outperform those using other state-of-the-art normalization schemes.
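
The abstract describes the idea (standardize features, then also reduce their skewness, i.e., the third standardized moment $\mathbb{E}[((x-\mu)/\sigma)^3]$) but not the exact transform BNSR uses. The sketch below is therefore only an illustrative assumption, not the authors' method: it applies standard batch normalization and then a sign-preserving power nonlinearity to pull the standardized features toward a more symmetric distribution. The module name `BNSRSketch` and the exponent value are hypothetical choices.

```python
import torch
import torch.nn as nn


class BNSRSketch(nn.Module):
    """BatchNorm2d followed by a hypothetical skewness-reducing nonlinearity.

    This is a sketch under assumptions; the actual BNSR transform is not
    specified in the abstract above.
    """

    def __init__(self, num_features, power=0.8, eps=1e-5, momentum=0.1):
        super().__init__()
        # Standard BN: per-channel zero mean and unit variance over the batch.
        self.bn = nn.BatchNorm2d(num_features, eps=eps, momentum=momentum)
        # Exponent < 1 compresses large deviations, pulling skewed features
        # toward symmetry; 0.8 is an illustrative value, not from the paper.
        self.power = power

    def forward(self, x):
        x = self.bn(x)
        # Odd, monotonic power mapping: preserves sign and feature ordering
        # while acting as a nonlinearity on the standardized features.
        return torch.sign(x) * torch.abs(x).pow(self.power)


if __name__ == "__main__":
    # Usage sketch: drop-in replacement for nn.BatchNorm2d in a conv block.
    layer = BNSRSketch(num_features=16)
    out = layer(torch.randn(8, 16, 32, 32))
    print(out.shape)  # torch.Size([8, 16, 32, 32])
```

Because the power mapping is monotonic, it reshapes the per-channel distribution without changing the ordering of activations, which is one plausible way to read "transforming the skewness" in the abstract.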
Data: [CIFAR-100](https://paperswithcode.com/dataset/cifar-100)