Is the Number of Trainable Parameters All That Actually Matters?

Published: 18 Oct 2021, Last Modified: 08 Sept 2024
Venue: ICBINB@NeurIPS 2021 Spotlight
Keywords: autoregressive transformers; structured transforms; random parameters; scaling laws; FastFood; Hadamard; block structured; generative pretrained transformers
TL;DR: The relationship between test loss and compute depends only on the actual number of trainable parameters and cannot be deceived by spurious parameters.
Abstract: Recent work has identified simple empirical scaling laws for language models, linking compute budget, dataset size, model size, and autoregressive modeling loss. The validity of these simple power laws across orders of magnitude in model scale provides compelling evidence that larger models are also more capable models. However, scaling up models under the constraints of hardware and infrastructure is no easy feat, and rapidly becomes a hard and expensive engineering problem. We investigate ways to tentatively *cheat* scaling laws and train larger models at lower cost. We emulate an increase in effective parameters using efficient approximations: either by *doping* the models with frozen random parameters, or by using fast structured transforms in place of dense linear layers. We find that the scaling relationship between test loss and compute depends only on the *actual* number of trainable parameters; scaling laws cannot be deceived by spurious parameters.
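
For illustration, below is a minimal PyTorch-style sketch of the two ways of inflating the nominal parameter count mentioned in the abstract: doping a linear layer with frozen random parameters, and replacing a dense linear layer with a fast Hadamard-based (Fastfood-style) structured transform. The class names, the additive doping scheme, and the simplified diag-Hadamard-diag construction are illustrative assumptions, not the exact constructions used in the paper.

```python
# Hedged sketch: module names and design details below are assumptions for
# exposition, not the paper's implementation.
import math
import torch
import torch.nn as nn


class DopedLinear(nn.Module):
    """Linear layer 'doped' with a frozen random weight matrix.

    Only `weight` receives gradients; `frozen` is registered as a buffer, so it
    inflates the nominal parameter count without adding trainable parameters.
    """

    def __init__(self, d_in: int, d_out: int):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(d_out, d_in) / math.sqrt(d_in))
        self.register_buffer("frozen", torch.randn(d_out, d_in) / math.sqrt(d_in))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Effective weight = trainable part + frozen random part.
        return x @ (self.weight + self.frozen).t()


def fwht(x: torch.Tensor) -> torch.Tensor:
    """Fast Walsh-Hadamard transform over the last dim (size must be 2**k)."""
    d = x.shape[-1]
    h = 1
    while h < d:
        y = x.reshape(*x.shape[:-1], d // (2 * h), 2, h)
        a, b = y[..., 0, :], y[..., 1, :]
        x = torch.stack((a + b, a - b), dim=-2).reshape(*x.shape[:-1], d)
        h *= 2
    return x / math.sqrt(d)


class HadamardLinear(nn.Module):
    """Simplified Fastfood-style stand-in for a dense d x d linear layer:
    y = diag(g) H diag(b) x, with O(d) trainable parameters instead of O(d^2).
    """

    def __init__(self, d: int):
        super().__init__()
        assert d & (d - 1) == 0, "dimension must be a power of two"
        self.g = nn.Parameter(torch.ones(d))                                    # trainable scaling
        self.register_buffer("b", 2.0 * torch.randint(0, 2, (d,)).float() - 1)  # frozen random signs

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.g * fwht(self.b * x)


if __name__ == "__main__":
    x = torch.randn(4, 256)
    print(DopedLinear(256, 256)(x).shape)  # torch.Size([4, 256])
    print(HadamardLinear(256)(x).shape)    # torch.Size([4, 256])
```

In both sketches the frozen tensors are buffers rather than parameters, so they add "spurious" parameters that the optimizer never updates, which is exactly the kind of parameter inflation the paper finds scaling laws are insensitive to.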
Category: Negative result: I would like to share my insights and negative results on this topic with the community
Community Implementations: [1 code implementation (CatalyzeX)](https://www.catalyzex.com/paper/is-the-number-of-trainable-parameters-all/code)