Keywords: Large language models, Tabular data, Data generation, Data quality control
TL;DR: We introduce team-then-trim, a framework that synthesizes high-quality tabular data through a collaborative team of LLMs, followed by a rigorous data quality control pipeline.
Abstract: While tabular data is fundamental to many real-world machine learning (ML) applications, acquiring high-quality tabular data is usually labor-intensive and expensive. Limited by the scarcity of observations, tabular datasets often exhibit critical deficiencies, such as class imbalance, selection bias, and low fidelity. To address these challenges, building on recent advances in Large Language Models (LLMs), this paper introduces team-then-trim, a framework that synthesizes high-quality tabular data through a collaborative team of LLMs, followed by a rigorous data quality control (QC) pipeline. In our framework, tabular data generation is conceptualized as a manufacturing process: specialized LLMs, guided by domain knowledge, are tasked with generating different data components sequentially, and the resulting products, i.e., the synthetic data, are systematically evaluated across multiple dimensions of QC. Empirical results on both simulated and real-world datasets demonstrate that our framework outperforms state-of-the-art methods in producing high-quality tabular data, highlighting its potential to support downstream models when direct data collection is practically infeasible.
Primary Area: foundation or frontier models, including LLMs
Submission Number: 8446