Guidelines for the Regularization of Gammas in Batch Normalization for Deep Residual Networks

Published: 01 Jan 2024, Last Modified: 28 Sept 2024 · ACM Trans. Intell. Syst. Technol. 2024 · CC BY-SA 4.0
Abstract: L2 regularization of the weights in neural networks is widely used as a standard training technique. When batch normalization is used, it introduces an additional trainable parameter γ, which acts as a scaling factor. However, L2 regularization of γ has received little discussion and is applied inconsistently across libraries and practitioners. In this article, we study whether L2 regularization of γ is valid. To explore this issue, we consider two approaches: (1) variance control, which makes the residual network behave like an identity mapping, and (2) stable optimization through an improved effective learning rate. From these two analyses, we identify the γ parameters for which L2 regularization is desirable and those for which it is not, and we propose four guidelines for managing them. In several experiments, we observed that applying L2 regularization to applicable γ increased classification accuracy by 1% to 4%, whereas applying it to inapplicable γ decreased classification accuracy by 1% to 3%, consistent with our four guidelines. The proposed guidelines were further validated on various tasks and architectures, including variants of residual networks and transformers.
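To make the setting concrete, the following is a minimal sketch (assuming PyTorch and a torchvision ResNet-18) of how L2 regularization, in the form of weight decay, can be applied selectively to batch-norm γ parameters via optimizer parameter groups. The selection rule used here ("downsample" batch norms get decay, others do not) is purely illustrative and is not the paper's actual guideline.

```python
# Minimal sketch: selective weight decay (L2 regularization) on batch-norm gamma.
# The split rule below is illustrative only, not the guideline proposed in the paper.
import torch
import torch.nn as nn
import torchvision

model = torchvision.models.resnet18(num_classes=10)

decay_params, no_decay_params = [], []
for name, module in model.named_modules():
    if isinstance(module, nn.BatchNorm2d):
        # module.weight is the gamma scaling factor, module.bias is beta.
        if "downsample" in name:                 # illustrative selection rule
            decay_params.append(module.weight)   # gamma chosen for L2 regularization
        else:
            no_decay_params.append(module.weight)  # gamma left unregularized
        no_decay_params.append(module.bias)      # beta is typically not regularized
    else:
        for p in module.parameters(recurse=False):
            decay_params.append(p)               # ordinary weights get standard L2

optimizer = torch.optim.SGD(
    [
        {"params": decay_params, "weight_decay": 1e-4},
        {"params": no_decay_params, "weight_decay": 0.0},
    ],
    lr=0.1,
    momentum=0.9,
)
```

Using parameter groups this way keeps the regularization decision per parameter tensor, so any per-γ rule (such as the guidelines studied in the paper) can be expressed by changing only the selection condition.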