TabKDE: Simple and Scalable Tabular Data Generation with Kernel Density Estimates

ICLR 2026 Conference Submission 21784 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Tabular Data Synthesis, Kernel Density Estimates, Coreset Compression, Copula Transformation, Generative model
TL;DR: Scalable tabular data generation using kernel density estimates and copula transformations, achieving near state-of-the-art accuracy and privacy with very short training time.
Abstract: Tabular data generation considers a large table with multiple columns -- each column composed of numerical, categorical, or sometimes ordinal values. The goal is to produce new rows for the table that replicate the distribution of rows from the original data -- without simply copying those original rows. The last three years have seen enormous progress on this problem, mostly using computationally expensive methods that employ one-hot encoding, VAEs, and diffusion. This paper describes a new approach to the problem of tabular data generation. By employing copula transformations and modeling the distribution as a kernel density estimate, we can nearly match the accuracy and privacy-preservation achievements of the previous methods, but with almost no training time. Our method is very scalable, and can be run on data sets orders of magnitude larger than prior art on a simple laptop. Moreover, because we employ kernel density estimates, we can store the model as a coreset of the original data -- we believe the first such use of coresets for generative modeling -- and as a result, require significantly less space as well. Our code is available here: \url{http://github.com/tabkde/tabkde-main}
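To make the abstract's pipeline concrete, here is a minimal sketch (not the authors' implementation) of the idea as described: map numerical columns through an empirical copula (rank to Gaussian) transform, model the transformed rows with a Gaussian kernel density estimate, sample by perturbing stored rows, and invert the transform. The bandwidth choice, the quantile-based inverse, and the omission of categorical columns and coreset selection are assumptions made for illustration only.

```python
import numpy as np
from scipy.stats import norm


def copula_transform(X):
    """Per-column empirical CDF followed by a Gaussian quantile (probit) map."""
    n, _ = X.shape
    ranks = np.argsort(np.argsort(X, axis=0), axis=0) + 1
    u = ranks / (n + 1)                      # uniform marginals in (0, 1)
    return norm.ppf(u)


def inverse_copula_transform(Z, X_ref):
    """Map Gaussian-scale samples back via the reference data's empirical quantiles."""
    u = norm.cdf(Z)
    X_sorted = np.sort(X_ref, axis=0)
    n = X_ref.shape[0]
    idx = np.clip((u * n).astype(int), 0, n - 1)
    return np.take_along_axis(X_sorted, idx, axis=0)


def kde_sample(Z, n_samples, bandwidth=0.2, rng=None):
    """Sample a Gaussian KDE: pick stored rows, add kernel-scaled noise."""
    rng = np.random.default_rng(rng)
    picks = rng.integers(0, Z.shape[0], size=n_samples)
    return Z[picks] + bandwidth * rng.standard_normal((n_samples, Z.shape[1]))


# Usage: generate synthetic rows for a purely numerical toy table X (n x d).
X = np.random.default_rng(0).gamma(2.0, size=(500, 3))
Z = copula_transform(X)
Z_new = kde_sample(Z, n_samples=200, bandwidth=0.2, rng=1)
X_new = inverse_copula_transform(Z_new, X)
```

In this sketch, "training" amounts to storing (a subset of) the transformed rows, which is why the abstract can claim near-zero training time; replacing `Z` with a coreset would reduce the stored model size, as the paper describes.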
Primary Area: generative models
Submission Number: 21784