Initializing the Layer-wise Learning Rate

27 Sept 2024 (modified: 05 Feb 2025) · Submitted to ICLR 2025 · CC BY 4.0
Keywords: learning rate, exploding gradient, vanishing gradient, initialization
TL;DR: Set layer-wise learning rates inversely proportional to the gradient magnitude at initialization
Abstract: Weight initialization schemes have been devised with heavy emphasis on the initial training dynamics, assuming the optimizer automatically handles appropriate step sizes in prolonged training. The optimizer typically calculates step sizes using a single, global learning rate across all parameters, focusing exclusively on the (exponentially averaged) in-training gradient. Motivated by the hierarchical structure inherent in deep networks, this work explores assigning non-adaptive layer-wise learning rates based on the differences in gradient magnitude at initialization as a practical and effective optimization strategy. The gradient magnitude used to preset the layer-wise learning rates is measured at fan-in initialization, since stable activation variance is considered a desirable property during training and is therefore assumed to largely hold in prolonged training. Experiments on convolutional and transformer architectures show that the proposed layer-wise learning rates improve training stability and convergence in image classification and autoregressive language modeling.
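The idea stated in the TL;DR and abstract can be illustrated with a minimal PyTorch sketch, assuming the per-layer gradient magnitude is measured from a single probe batch at fan-in initialization and then used to preset fixed (non-adaptive) per-parameter-group learning rates. The model, the probe batch, and the inverse-normalization scheme below are illustrative assumptions, not the paper's actual implementation.

```python
# Sketch (not the authors' code): preset layer-wise learning rates inversely
# proportional to the gradient magnitude measured at fan-in initialization,
# then train with a non-adaptive optimizer using those fixed group-wise rates.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical small model; any architecture is handled the same way.
model = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 10),
)

# Fan-in (Kaiming) initialization, as assumed in the abstract.
for m in model.modules():
    if isinstance(m, nn.Linear):
        nn.init.kaiming_normal_(m.weight, mode="fan_in", nonlinearity="relu")
        nn.init.zeros_(m.bias)

# One probe batch to measure gradient magnitudes at initialization.
x = torch.randn(128, 32)
y = torch.randint(0, 10, (128,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()

# RMS gradient magnitude per parameter tensor at initialization.
grad_rms = {
    name: (p.grad.norm() / p.numel() ** 0.5).item()
    for name, p in model.named_parameters()
}
mean_rms = sum(grad_rms.values()) / len(grad_rms)

# Learning rate scaled opposite (inversely) to the measured gradient magnitude,
# normalized so a layer with average gradient magnitude keeps the base rate.
base_lr = 0.1
param_groups = [
    {"params": [p], "lr": base_lr * mean_rms / (grad_rms[name] + 1e-12)}
    for name, p in model.named_parameters()
]

model.zero_grad()
optimizer = torch.optim.SGD(param_groups)  # non-adaptive; rates stay fixed
```

After this one-time measurement, training proceeds as usual; the per-group rates are set once and never re-estimated, in contrast to adaptive optimizers that rescale steps from in-training gradient statistics.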
Supplementary Material: zip
Primary Area: optimization
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 8493