Keywords: Evaluation, Synthetic Data
TL;DR: We introduce the SynQuE problem of estimating synthetic dataset quality using only unannotated real data, together with baseline proxy metrics and our novel LENS metric.
Abstract: We introduce and formalize the Synthetic Dataset Quality Estimation (SynQuE) problem: ranking synthetic datasets by their expected real-world task performance using only limited unannotated real data.
This addresses a critical and open challenge where data is scarce due to collection costs or privacy constraints.
We establish the first comprehensive benchmarks for this problem by introducing and evaluating proxy metrics that select synthetic training data so as to maximize task performance on real data. These first SynQuE proxies adapt distribution- and diversity-based distance measures to our setting via embedding models.
To address the shortcomings of these metrics on complex planning tasks, we propose LENS, a novel proxy that leverages large language model reasoning.
Our results show that SynQuE proxies correlate with real task performance across diverse tasks, including sentiment analysis, Text2SQL, web navigation, and image classification, with LENS consistently outperforming the others on complex tasks by capturing nuanced characteristics.
For instance, on text-to-SQL parsing, training on the top-3 synthetic datasets selected via SynQuE proxies raises accuracy from 30.4\% to 38.4\% (+8.1) on average, compared to selecting data indiscriminately.
This work establishes SynQuE as a practical framework for synthetic data selection under real-data scarcity and motivates future research on foundation model-based data characterization and fine-grained data selection.
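As a rough illustration of the distribution-based proxies mentioned above, the sketch below ranks candidate synthetic datasets by an embedding-space distance to a small pool of unannotated real examples. This is a minimal sketch under stated assumptions, not the paper's exact metric: it assumes precomputed embeddings and uses an RBF-kernel MMD as the distance, and all function names are illustrative.

```python
# Illustrative SynQuE-style proxy (assumption: embedding-based MMD distance):
# rank candidate synthetic datasets by how closely their embedding distribution
# matches a small pool of real, unannotated examples. Embeddings are assumed
# to be precomputed (e.g., with any sentence-embedding model) as numpy arrays.
import numpy as np

def rbf_kernel(a: np.ndarray, b: np.ndarray, gamma: float = 1.0) -> np.ndarray:
    # Pairwise RBF kernel values between rows of a and rows of b.
    sq_dists = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq_dists)

def mmd(real: np.ndarray, synth: np.ndarray, gamma: float = 1.0) -> float:
    # Maximum Mean Discrepancy: smaller values mean the synthetic embeddings
    # look more like the real ones.
    k_rr = rbf_kernel(real, real, gamma).mean()
    k_ss = rbf_kernel(synth, synth, gamma).mean()
    k_rs = rbf_kernel(real, synth, gamma).mean()
    return float(k_rr + k_ss - 2.0 * k_rs)

def rank_synthetic_datasets(real_emb: np.ndarray,
                            candidates: dict[str, np.ndarray]) -> list[str]:
    # Return candidate dataset names ordered from most to least "real-like".
    scores = {name: mmd(real_emb, emb) for name, emb in candidates.items()}
    return sorted(scores, key=scores.get)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    real = rng.normal(0.0, 1.0, size=(50, 8))           # unannotated real pool
    candidates = {
        "synth_close": rng.normal(0.1, 1.0, size=(200, 8)),
        "synth_far": rng.normal(2.0, 1.0, size=(200, 8)),
    }
    print(rank_synthetic_datasets(real, candidates))     # "synth_close" first
```

In this reading, the top-ranked datasets would be the ones used for training, mirroring the top-3 selection described in the abstract; diversity-based proxies and the LENS metric would replace the distance function with other scoring schemes.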
Supplementary Material: zip
Primary Area: other topics in machine learning (i.e., none of the above)
Submission Number: 15544