Towards Active Synthetic Data Generation for Finetuning Language Models

ICLR 2026 Conference Submission 13622 Authors

18 Sept 2025 (modified: 08 Oct 2025), ICLR 2026 Conference Submission, CC BY 4.0
Keywords: Synthetic data generation, active learning, language models, supervised finetuning
TL;DR: Generating synthetic data conditioned on the student model enables more data efficient supervised finetuning.
Abstract: A common and effective means of improving language model capabilities is to finetune a “student” language model’s parameters on generations from a more proficient “teacher” model. Termed “synthetic data”, these generations are often produced before any student finetuning, though some work has considered generating new synthetic samples as training progresses. This paper studies and advocates for the latter case, where data are generated in an iterative, closed-loop fashion guided by the current state of the student model. For a fixed budget of generated samples, or of compute spent querying the teacher, we show that this curation of finetuning data yields better student performance than static generation. Further, while several LLM-specific methods have been proposed for this regime, we find that simple, inexpensive selection criteria from the active learning literature tend to be the most performant. We validate these claims across four mathematical and logical reasoning datasets using four different small language models.
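To make the closed-loop setup concrete, the sketch below shows one way the described procedure could be structured: in each round the teacher over-generates candidates, an active-learning style criterion scores them against the current student, and only the highest-scoring samples are used for finetuning. This is a minimal illustration, not the authors' implementation; the function names (generate_candidates, student_uncertainty, finetune_student) and the oversampling factor are hypothetical placeholders.

```python
# Minimal sketch of active (closed-loop) synthetic data generation for SFT.
# All callables below are hypothetical placeholders for a teacher query,
# a student-conditioned scoring function, and a finetuning step.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Example:
    prompt: str
    teacher_answer: str


def active_synthetic_finetuning(
    generate_candidates: Callable[[int], List[Example]],   # query the teacher model
    student_uncertainty: Callable[[Example], float],       # e.g. the student's loss on the sample
    finetune_student: Callable[[List[Example]], None],     # one SFT round on the selected data
    sample_budget: int,
    rounds: int,
    oversample: int = 4,
) -> None:
    """Iteratively generate teacher data, keep the samples the current student
    finds hardest under a simple active-learning criterion, and finetune on them."""
    per_round = sample_budget // rounds
    for _ in range(rounds):
        # Over-generate candidates from the teacher, then curate.
        candidates = generate_candidates(per_round * oversample)
        # Rank by the student's current uncertainty and keep the top slice,
        # so selection is conditioned on the student's present state.
        selected = sorted(candidates, key=student_uncertainty, reverse=True)[:per_round]
        finetune_student(selected)
```

The static-generation baseline discussed in the abstract corresponds to running a single round with no oversampling, so the data are fixed before training begins.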
Primary Area: foundation or frontier models, including LLMs
Submission Number: 13622