TL;DR: We propose to prepend metadata (such as source URLs) to pre-training documents. Our method, metadata conditioning then cooldown (MeCo), significantly accelerates pre-training.
Abstract: The vast diversity of styles, domains, and quality levels present in language model pre-training corpora is essential for developing general model capabilities, but efficiently learning and deploying the correct behaviors exemplified in each of these heterogeneous data sources is challenging. To address this, we propose a new method, termed Metadata Conditioning then Cooldown (MeCo), to incorporate additional learning cues during pre-training. MeCo first provides metadata (e.g., URLs like en.wikipedia.org) alongside the text during training and later uses a cooldown phase with only the standard text, thereby enabling the model to function normally even without metadata. MeCo significantly accelerates pre-training across different model scales (600M to 8B parameters) and training sources (C4, RefinedWeb, and DCLM). For instance, a 1.6B language model trained with MeCo matches the downstream task performance of standard pre-training while using 33% less data. Additionally, MeCo enables us to steer language models by conditioning the inference prompt on either real or fabricated metadata that encodes the desired properties of the output: for example, prepending wikipedia.org to reduce harmful generations or factquizmaster.com (fabricated) to improve common knowledge task performance. We also demonstrate that MeCo is compatible with different types of metadata, such as model-generated topics. MeCo is remarkably simple, adds no computational overhead, and demonstrates promise in producing more capable and steerable language models.
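The training-time side of the method can be pictured as a small change to the data-formatting pipeline. The sketch below is illustrative only, not the released implementation (see the code link further down): the function names, the newline separator, and the 10% cooldown fraction are assumptions made for the example.

```python
# Illustrative sketch of MeCo-style data formatting (not the official code).
# Assumption: the metadata is the document's source URL, prepended as plain
# text and separated from the document body by a newline.

def format_example(text: str, url: str | None, in_cooldown: bool) -> str:
    """Prepend the URL during the metadata-conditioning phase; drop it during cooldown."""
    if in_cooldown or url is None:
        return text              # cooldown phase: standard text only
    return f"{url}\n{text}"      # conditioning phase: metadata + text


def build_training_stream(docs, total_steps: int, cooldown_frac: float = 0.1):
    """Yield formatted documents; the final `cooldown_frac` of training uses
    plain text so the model also functions without metadata at inference."""
    cooldown_start = int(total_steps * (1 - cooldown_frac))
    for step, doc in enumerate(docs):
        in_cooldown = step >= cooldown_start
        yield format_example(doc["text"], doc.get("url"), in_cooldown)
```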
Lay Summary: Diversity in data is essential for training versatile language models, but it also makes it challenging for models to efficiently learn and deploy the correct behaviors. We propose a new method, termed Metadata Conditioning then Cooldown (MeCo), to incorporate additional learning cues during pre-training. MeCo first prepends metadata (e.g., URLs like en.wikipedia.org) to the text during training and later uses a cooldown phase with only the standard text, thereby enabling the model to function normally even without metadata. MeCo significantly accelerates pre-training across different model scales and training sources, matching the performance of standard pre-training with up to 33% less data. Additionally, MeCo enables us to steer language models by conditioning the inference prompt on either real or fabricated URLs that encode the desired properties of the output: for example, prepending factquizmaster.com (fabricated) to improve common knowledge task performance; a sketch of this inference-time usage follows below. We also demonstrate that MeCo is compatible with different types of metadata, such as model-generated topics. MeCo is remarkably simple, adds no computational overhead, and demonstrates promise in producing more capable and steerable language models.
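At inference time, steering amounts to prepending a real or fabricated URL to the prompt, mirroring the training format. The sketch below is a hedged example using Hugging Face transformers; the checkpoint path, the newline separator, and the helper name are placeholders, not part of the released code.

```python
# Illustrative sketch of inference-time URL conditioning (not the official code).
# Assumption: a MeCo-trained causal LM checkpoint is available locally and the
# URL is prepended as plain text followed by a newline, as during training.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "path/to/meco-trained-model"   # placeholder checkpoint path
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

def steered_generate(prompt: str, url: str | None = None, max_new_tokens: int = 64) -> str:
    """Generate from the model, optionally conditioned on a (real or fabricated) URL."""
    conditioned = f"{url}\n{prompt}" if url else prompt
    inputs = tokenizer(conditioned, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Strip the prompt tokens and return only the newly generated text.
    return tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)

# Example: condition on a fabricated URL to encourage fact-oriented output.
# steered_generate("Q: Who wrote The Odyssey?\nA:", url="factquizmaster.com")
```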
Link To Code: https://github.com/princeton-pli/MeCo
Primary Area: Deep Learning->Large Language Models
Keywords: language models, pre-training, metadata conditioning, acceleration, URLs
Submission Number: 11833