Spend Wisely: Maximizing Post-Training Gains in Iterative Synthetic Data Bootstrapping

Published: 18 Sept 2025, Last Modified: 29 Oct 2025 · NeurIPS 2025 Spotlight · CC BY 4.0
Keywords: synthetic data, self-supervised fine-tuning, iterative learning, bootstrapping, budget allocation
TL;DR: We analyze optimal budget allocation strategies for iterative bootstrapping with synthetic data to maximize model performance.
Abstract: Modern foundation models often undergo iterative "bootstrapping" in their post-training phase: a model generates synthetic data, an external verifier filters out low-quality samples, and the high-quality subset is used for further fine-tuning. Over multiple iterations, the model's performance improves, raising a crucial question: How should the total budget for generation and training be allocated across iterations to maximize final performance? In this work, we develop a theoretical framework for analyzing budget allocation strategies. Specifically, we show that constant policies fail to converge with high probability, while increasing policies---particularly exponential growth policies---exhibit significant theoretical advantages. Experiments on image denoising with diffusion probabilistic models and math reasoning with large language models show that both exponential and polynomial growth policies consistently outperform constant policies, with exponential policies often providing more stable performance.
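To make the setting concrete, here is a minimal Python sketch of the bootstrapping loop and the three allocation-policy families the abstract contrasts. The exact functional forms (per-iteration budgets proportional to t^p or r^t) are illustrative assumptions, not the paper's precise definitions, and `generate`, `verify`, and `finetune` are hypothetical placeholders for the model-specific steps.

```python
from typing import Callable, List


def constant_policy(total_budget: int, num_iters: int) -> List[int]:
    """Constant policy: split the budget evenly, b_t = B / T."""
    return [total_budget // num_iters] * num_iters


def polynomial_policy(total_budget: int, num_iters: int, p: float = 2.0) -> List[int]:
    """Polynomial growth (assumed form): b_t proportional to t^p, normalized to B."""
    weights = [(t + 1) ** p for t in range(num_iters)]
    scale = total_budget / sum(weights)
    return [max(1, int(w * scale)) for w in weights]


def exponential_policy(total_budget: int, num_iters: int, r: float = 2.0) -> List[int]:
    """Exponential growth (assumed form): b_t proportional to r^t, normalized to B."""
    weights = [r ** t for t in range(num_iters)]
    scale = total_budget / sum(weights)
    return [max(1, int(w * scale)) for w in weights]


def bootstrap(model,
              generate: Callable,   # hypothetical: draw n synthetic samples from the model
              verify: Callable,     # hypothetical: external verifier, True if high quality
              finetune: Callable,   # hypothetical: fine-tune on the filtered subset
              budgets: List[int]):
    """One bootstrapping run: generate -> filter -> fine-tune, once per iteration."""
    for b_t in budgets:
        samples = generate(model, n=b_t)           # spend this iteration's budget b_t
        kept = [s for s in samples if verify(s)]   # keep only verified samples
        model = finetune(model, kept)              # further fine-tune on the subset
    return model


# e.g., exponential_policy(10_000, 5) front-loads little and grows each round:
# roughly [322, 645, 1290, 2580, 5161]; integer rounding may leave a small remainder.
```

The policies differ only in how they shape the same total budget B across T iterations, which is exactly the design axis the paper's theory and experiments compare.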
Primary Area: Theory (e.g., control theory, learning theory, algorithmic game theory)
Submission Number: 14794