Keywords: large language models (LLMs), pretraining, experiments, memorization
TL;DR: We show that it is possible to conduct multiple pretraining experiments during the training of a single LLM.
Abstract: Recent work has demonstrated that controlled pretraining experiments are a powerful tool for studying the relationship between training data and large language model (LLM) behavior. However, the computational cost of pretraining presents a significant constraint. To overcome this constraint, we propose a new approach in which multiple experiments are conducted simultaneously during a *single* training run. We validate our approach by performing ten experiments while training on 210B tokens, with models of up to 2.7B parameters. Although the models are trained only once, we replicate the results of multiple previous works on data contamination, poisoning, and memorization. We also conduct novel investigations into knowledge acquisition, mathematical reasoning, and watermarking. For example, we dynamically update the training data until a model acquires a particular piece of knowledge. Remarkably, the influence of the experiments on the model's training dynamics and overall performance is minimal. However, interactions between experiments may act as a confounder in our approach. We propose continual pretraining dependence testing (CPDT), a novel technique for detecting interactions between continual pretraining experiments, and find such interactions to be negligible in our setup. Overall, our results suggest that performing multiple pretraining experiments within a single training run can enable rigorous scientific experimentation with large models on a limited compute budget.
Primary Area: foundation or frontier models, including LLMs
Submission Number: 12814