How to Synthesize Text Data without Model Collapse?

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY-NC-SA 4.0
Abstract: Model collapse refers to the gradual decline in performance that results from iteratively training models on their own synthetic data. With the proliferation of AI models, synthetic data will fundamentally reshape the web data ecosystem, and future GPT-$\{n\}$ models will inevitably be trained on a blend of synthetic and human-produced data. In this paper, we focus on two questions: what is the impact of synthetic data on language model training, and how can data be synthesized without model collapse? We first pre-train language models on different proportions of synthetic data, revealing a negative correlation between the proportion of synthetic data and model performance. We then conduct a statistical analysis of synthetic data, uncovering a distributional shift and an over-concentration of n-gram features. Motivated by these findings, we propose token-level editing of human-produced data to obtain semi-synthetic data. As a proof of concept, we theoretically demonstrate that token-level editing can prevent model collapse, because the test error is constrained by a finite upper bound. We conduct extensive experiments on pre-training from scratch, continual pre-training, and supervised fine-tuning. The results validate our theoretical analysis: token-level editing improves data quality and enhances model performance.
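The core idea can be sketched in a few lines: score each token of a human-written text with a prior language model, and resample only the tokens the model predicts with very high probability (the over-concentrated patterns the abstract associates with collapse), leaving the rest of the text intact. Below is a minimal, hypothetical sketch assuming a Hugging Face causal LM; the prior model (`gpt2`), the threshold `p_threshold = 0.99`, and multinomial resampling are illustrative assumptions, not the paper's exact configuration (see the linked repository for the authors' implementation).

```python
# Sketch of token-level editing to produce semi-synthetic data.
# Assumption: a causal LM serves as the prior; tokens it predicts with
# probability >= p_threshold are resampled, all others are kept as-is.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in prior model, not necessarily the paper's choice
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def token_level_edit(text: str, p_threshold: float = 0.99) -> str:
    """Return a semi-synthetic copy of `text` in which tokens the prior
    model predicts with probability >= p_threshold are resampled."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits           # (1, seq_len, vocab)
    probs = torch.softmax(logits, dim=-1)
    edited = ids.clone()
    # For a causal LM, probs[0, i] is the distribution over the token
    # at position i + 1, conditioned on tokens 0..i.
    for i in range(ids.size(1) - 1):
        p_next = probs[0, i, ids[0, i + 1]].item()
        if p_next >= p_threshold:            # over-confident token: resample
            edited[0, i + 1] = torch.multinomial(probs[0, i], 1).item()
    return tokenizer.decode(edited[0], skip_special_tokens=True)

print(token_level_edit("The quick brown fox jumps over the lazy dog."))
```

Because only high-probability tokens are touched, most of the human text survives verbatim, which is what keeps the edited distribution anchored to the human one rather than drifting as fully self-generated data does.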
Lay Summary: This work tackles the problem of “model collapse,” where training AI models on synthetic data leads to worse performance over time. We show that synthetic data lacks the diversity of human-written content, which causes models to degrade. To solve this, we propose token-level editing, a simple technique that edits individual tokens in human-written data while preserving its overall structure. This method prevents model collapse and improves AI performance across various tasks.
Link To Code: https://github.com/Xuekai-Zhu/toedit
Primary Area: Deep Learning
Keywords: synthetic data, model collapse
Submission Number: 6140