Less is More: Adaptive Coverage Sampling for Synthetic Training Data

ICLR 2026 Conference Submission 20759 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Large Language Models, Synthetic Data Generation, Sampling Algorithms, Maximum Coverage Problem, Data Efficiency
Abstract: Synthetic training data generation with Large Language Models (LLMs) offer a promising solution to the challenge of obtaining large, labeled datasets for training classifiers. When rapid model deployment is critical, such as in classifying emerging social media trends or combating new forms of online abuse tied to current events, the ability to generate training data is invaluable. While prior research has examined the comparability of synthetic data to human-labeled data, this study introduces a novel sampling algorithm, based on the maximum coverage problem, to select a representative subset from a synthetically generated dataset. Our results demonstrate that training a classifier on this contextually sampled subset achieves superior performance compared to training on the entire dataset. This ``less is more'' approach not only improves model accuracy but also reduces the volume of data required, leading to potentially more efficient model fine-tuning.
Supplementary Material: zip
Primary Area: other topics in machine learning (i.e., none of the above)
Submission Number: 20759