Abstract: High-quality data plays a critical role in the pretraining and fine-tuning of large language models (LLMs), and to some degree determines their performance ceiling. Consequently, numerous data selection methods have been proposed to identify subsets of data that can effectively and efficiently enhance model performance. However, most of these methods focus on general data selection and tend to overlook the specific nuances of domain-related data. In this paper, we introduce MASS, a Mathematical data Selection framework using the Skill graph for pretraining LLMs in the mathematical reasoning domain. By taking into account the unique characteristics of mathematics and reasoning, we construct a skill graph that captures mathematical skills and their interrelations from a reference dataset. This skill graph guides us in assigning quality scores to the target dataset, enabling us to select a top-ranked subset that is then used to pretrain LLMs. Experimental results demonstrate the efficiency and effectiveness of MASS across different model sizes (1B and 7B) and pretraining datasets (web data and synthetic data). Specifically, in terms of efficiency, models trained on subsets selected by MASS achieve performance comparable to models trained on the original datasets while requiring 50\% to 70\% fewer training tokens. In terms of effectiveness, when trained on the same number of tokens, models trained on the data selected by MASS outperform those trained on the original datasets by 3.3\% to 5.9\%. These results underscore the potential of MASS to improve both the efficiency and effectiveness of pretraining LLMs.
Lay Summary: General-purpose language models (LMs) are designed as generalists rather than specialists, meaning they lack deep expertise in specific domains such as mathematics. To adapt these models for domain-specific tasks (e.g., solving mathematical problems), conventional approaches require continued pretraining on vast amounts of domain-related text—a process that is both computationally expensive and time-consuming. A key inefficiency lies in the sheer scale of the training corpus, which often contains redundant or low-value samples.
To address this inefficiency, we propose a novel method that significantly reduces the required training data and time (by over 50%) while achieving comparable or superior model performance. Our core insight is that existing pretraining datasets contain substantial redundancy, including repetitive or irrelevant samples (e.g., trivial math operations or unrelated skills). To systematically identify high-quality data, we introduce a skill graph that quantifies the importance of mathematical skills and their interdependencies. This graph serves as a guide for selecting the most informative subset of training data, enabling efficient domain adaptation with minimal computational overhead.
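To make the selection idea concrete, here is a minimal sketch of skill-graph-guided scoring. It assumes a simplified setting that is not described in the submission itself: each training sample has already been annotated with the mathematical skills it exercises, node weights encode per-skill importance, and edge weights encode the strength of skill interdependencies. All function and variable names (`score_sample`, `select_top_fraction`, `node_weight`, `edge_weight`) are hypothetical, not the authors' implementation.

```python
def score_sample(skills, node_weight, edge_weight):
    """Score one sample: sum the importance of the skills it covers,
    plus the interdependency strength of each skill pair it connects.
    (Hypothetical scoring rule for illustration, not the paper's exact formula.)"""
    score = sum(node_weight.get(s, 0.0) for s in skills)
    for i, a in enumerate(skills):
        for b in skills[i + 1:]:
            score += edge_weight.get(frozenset((a, b)), 0.0)
    return score


def select_top_fraction(samples, node_weight, edge_weight, keep=0.5):
    """Rank samples by skill-graph score and keep the top `keep` fraction,
    mirroring the 'select the top-ranked subset' step."""
    ranked = sorted(
        samples,
        key=lambda s: score_sample(s["skills"], node_weight, edge_weight),
        reverse=True,
    )
    return ranked[: max(1, int(len(ranked) * keep))]


# Toy example: a tiny skill graph and three annotated samples.
node_weight = {"algebra": 1.0, "geometry": 0.5}
edge_weight = {frozenset(("algebra", "geometry")): 0.3}
samples = [
    {"id": 1, "skills": ["algebra", "geometry"]},  # covers both skills and their link
    {"id": 2, "skills": ["geometry"]},
    {"id": 3, "skills": []},                       # redundant / low-value sample
]
kept = select_top_fraction(samples, node_weight, edge_weight, keep=0.5)
```

In this toy run, the sample covering both skills and their interdependency scores highest, while the skill-free sample is discarded, which is exactly the kind of redundancy filtering the lay summary describes.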
Application-Driven Machine Learning: This submission is on Application-Driven Machine Learning.
Link To Code: https://github.com/lijiazheng0917/MASS
Primary Area: Deep Learning->Large Language Models
Keywords: Large language models, Data selection, Pre-training
Submission Number: 7856