Associate Normalization

27 Sept 2018 (modified: 05 May 2023) · ICLR 2019 Conference Withdrawn Submission
Abstract: Normalization is a key technique for training deep neural networks: it stabilizes the training process and thus makes networks easier to train. In typical normalization methods, however, the rescaling parameters that control the mean and variance of the output are not associated with any input information during the forward pass. Inputs of different types are therefore treated as if drawn from exactly the same distribution, which may limit the expressiveness of the normalization module. We present Associate Normalization (AssocNorm) to overcome this limitation. AssocNorm extracts key information from the input features and connects it to the rescaling parameters through an auto-encoder-like neural network inside the normalization module. Furthermore, AssocNorm normalizes the features of each example individually, so its accuracy remains relatively stable across batch sizes. Experimental results show that AssocNorm outperforms Batch Normalization on several benchmark datasets under various hyper-parameter settings.
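To make the mechanism concrete, below is a minimal PyTorch sketch of the idea described in the abstract: per-example (group-norm style) normalization whose rescaling parameters gamma and beta are predicted from pooled input statistics by a small encoder-decoder network, rather than being fixed learnable vectors. The choice of pooled statistics, the group count, and the encoder-decoder sizes (num_groups, bottleneck) are assumptions for illustration; the abstract does not specify them.

```python
import torch
import torch.nn as nn


class AssocNorm(nn.Module):
    """Sketch of Associate Normalization (AssocNorm).

    Normalizes each example individually (group-norm style, so it is
    independent of batch size) and predicts the per-channel rescaling
    parameters gamma/beta from pooled input statistics via a small
    auto-encoder-like MLP. Details are assumptions, not the authors' code.
    """

    def __init__(self, num_channels, num_groups=32, bottleneck=16, eps=1e-5):
        super().__init__()
        self.num_groups = num_groups
        self.eps = eps
        # Encoder-decoder mapping per-channel input statistics
        # (mean and std, 2 * C values) to per-channel gamma and beta.
        self.encoder = nn.Sequential(
            nn.Linear(2 * num_channels, bottleneck),
            nn.ReLU(inplace=True),
        )
        self.decoder = nn.Linear(bottleneck, 2 * num_channels)

    def forward(self, x):
        n, c, h, w = x.shape
        # Per-example, per-channel statistics serve as the "key information"
        # extracted from the input features.
        mu = x.mean(dim=(2, 3))                               # (N, C)
        sigma = x.var(dim=(2, 3), unbiased=False).add(self.eps).sqrt()
        stats = torch.cat([mu, sigma], dim=1)                 # (N, 2C)
        gamma_beta = self.decoder(self.encoder(stats))
        gamma, beta = gamma_beta.chunk(2, dim=1)              # (N, C) each

        # Batch-size-independent normalization over channel groups.
        g = x.view(n, self.num_groups, -1)
        g = (g - g.mean(dim=2, keepdim=True)) / (
            g.var(dim=2, unbiased=False, keepdim=True) + self.eps
        ).sqrt()
        x_hat = g.view(n, c, h, w)

        # Input-dependent rescaling instead of fixed affine parameters.
        return gamma.view(n, c, 1, 1) * x_hat + beta.view(n, c, 1, 1)


# Usage: drop-in replacement for a BatchNorm2d layer.
norm = AssocNorm(num_channels=64)
out = norm(torch.randn(8, 64, 32, 32))
```

Because the statistics and the normalization are computed per example, the layer behaves identically for any batch size, which matches the abstract's claim of stable accuracy across batch-size settings.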