Efficient Online Data Mixing For Language Model Pre-Training

Published: 01 Nov 2023, Last Modified: 12 Dec 2023
Venue: R0-FoMo (Spotlight)
Keywords: data mixing, large language model, pretraining, data efficiency, efficient
TL;DR: This work proposes an online data mixing algorithm that reduces pretraining iterations by 30% while improving downstream task performance.
Abstract: The data used to pretrain large language models has a decisive impact on a model’s downstream performance, which has led to a large body of work on data selection methods that aim to automatically determine the most suitable data to use for pretraining. Existing data selection methods suffer from slow and computationally expensive processes, a problem amplified by the increasing size of models and of pretraining datasets. Data mixing, on the other hand, reduces the complexity of data selection by grouping data points together and determining sampling probabilities across entire groups. However, data mixing proportions are typically fixed before training and therefore cannot adapt to changing training dynamics. To address these limitations, we develop an efficient algorithm for Online Data Mixing (ODM) that combines elements from both data selection and data mixing. Based on multi-armed bandit algorithms, our online approach optimizes the data mixing proportions during training. Remarkably, our method trains a model that reaches the final perplexity of the next best method with 19% fewer training iterations, and improves performance on the 5-shot MMLU benchmark by 1.9% relative accuracy, while adding negligible wall-clock time during pretraining.
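To make the idea concrete, below is a minimal sketch of how a multi-armed bandit can adapt data mixing proportions online, assuming an EXP3-style update in which each pretraining data domain is an arm and the observed training loss on a sampled batch serves as the reward (so higher-loss domains are sampled more often). The class name `OnlineDataMixer`, the exploration rate `gamma`, and the helpers `train_step` and `domain_iterators` are illustrative assumptions, not the authors' released implementation.

```python
import numpy as np

class OnlineDataMixer:
    """Illustrative EXP3-style bandit over data domains (a sketch, not the
    paper's official code). Each domain is an arm; the training loss on a
    batch sampled from that domain is used as the reward, so domains with
    higher current loss receive larger sampling probabilities."""

    def __init__(self, num_domains: int, gamma: float = 0.1):
        self.num_domains = num_domains
        self.gamma = gamma                      # exploration rate (assumed hyperparameter)
        self.weights = np.ones(num_domains)     # unnormalized arm weights

    def probabilities(self) -> np.ndarray:
        # Mix the weight-proportional distribution with uniform exploration.
        w = self.weights / self.weights.sum()
        return (1 - self.gamma) * w + self.gamma / self.num_domains

    def sample_domain(self, rng: np.random.Generator) -> int:
        # Draw the domain to sample the next batch from.
        return int(rng.choice(self.num_domains, p=self.probabilities()))

    def update(self, domain: int, loss: float):
        # Importance-weighted reward estimate for the pulled arm only.
        # EXP3 assumes rewards in [0, 1]; in practice the loss would need
        # to be scaled or clipped into that range.
        probs = self.probabilities()
        estimated_reward = loss / probs[domain]
        self.weights[domain] *= np.exp(self.gamma * estimated_reward / self.num_domains)


# Hypothetical usage inside a simplified pretraining loop:
# mixer = OnlineDataMixer(num_domains=22)
# rng = np.random.default_rng(0)
# for step in range(num_steps):
#     d = mixer.sample_domain(rng)
#     batch = next(domain_iterators[d])        # per-domain data iterators (assumed)
#     loss = train_step(model, batch)          # returns the batch loss (assumed)
#     mixer.update(d, loss)
```

Because the only per-step overhead is sampling a domain and a scalar weight update, this style of online mixing adds negligible wall-clock time on top of the training step itself.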
Submission Number: 67