Keywords: warmstarting, hyperparameter optimization, language models, deep learning, scaling
TL;DR: A method to warmstart the training of a large model with optimal hyperparameters from a tuned, trained smaller model, improving pretraining convergence.
Abstract: Scaling model size to scale performance has worked remarkably well in the current large language model paradigm.
Empirical findings from various scaling studies have produced novel scaling results and laws that guide subsequent research.
However, prohibitively high training costs at contemporary scales of data and models result in a lack of thorough understanding of how to tune and arrive at such training setups efficiently.
One direction to ameliorate the cost of pretraining large models is to *warmstart* the large-scale training from smaller models that are cheaper to tune.
In this work, we attempt to understand whether the behavior of optimal hyperparameters can be retained under warmstarting for scaling.
We explore simple operations that enable the application of theoretically motivated zero-shot transfer of optimal hyperparameters via $\mu$Transfer.
We investigate the aspects that contribute to the speedup in convergence and the preservation of stable training dynamics under warmstarting with $\mu$Transfer.
We find that shrinking the smaller model's weights, zero-padding, and perturbing the resulting larger model with scaled initialization from $\mu$P enable effective warmstarting with $\mu$Transfer.
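To make the described operations concrete, below is a minimal sketch (not the authors' exact procedure) of warmstarting a single hidden weight matrix: shrink the small model's trained weights, zero-pad them into the larger shape, and add a random perturbation whose variance follows a $\mu$P-style $1/\mathrm{fan\_in}$ scaling. The shrink factor and the specific perturbation scale are illustrative assumptions.

```python
import torch

def warmstart_weight(w_small: torch.Tensor,
                     d_out_large: int,
                     d_in_large: int,
                     shrink: float = 0.5) -> torch.Tensor:
    """Map a (d_out_small, d_in_small) trained weight onto a larger
    (d_out_large, d_in_large) matrix for warmstarting."""
    d_out_small, d_in_small = w_small.shape

    # 1) Shrink the small-model weights (illustrative factor).
    w_shrunk = shrink * w_small

    # 2) Zero-pad into the larger weight matrix.
    w_large = torch.zeros(d_out_large, d_in_large)
    w_large[:d_out_small, :d_in_small] = w_shrunk

    # 3) Perturb with a muP-style initialization for hidden weights,
    #    whose standard deviation scales like 1 / sqrt(fan_in) of the large model.
    perturbation = torch.randn(d_out_large, d_in_large) / d_in_large ** 0.5
    return w_large + perturbation

# Example: warmstart a 1024x1024 hidden layer from a trained 256x256 layer.
w_small = torch.randn(256, 256) / 256 ** 0.5
w_large = warmstart_weight(w_small, 1024, 1024)
```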
Submission Number: 120