Keywords: Synthetic Tabular Data, In-Context Learning, Bias Propagation, Fairness in Machine Learning
TL;DR: Biases in in-context examples can propagate through LLM-generated tabular data, impacting fairness and enabling adversarial manipulation.
Abstract: Large Language Models (LLMs) are increasingly used for synthetic tabular data generation through in-context learning (ICL), offering a practical solution for data augmentation in data-scarce scenarios. While prior work has shown that LLMs can improve downstream performance by augmenting underrepresented groups, these benefits often assume access to a subset of in-context examples that is unbiased and representative of the real dataset. In real-world settings, however, data is frequently noisy and demographically skewed. In this paper, we systematically study how statistical biases within in-context examples propagate to the distribution of synthetic tabular data, showing that even mild in-context biases lead to global statistical distortions. We further introduce an adversarial scenario in which a malicious contributor can inject bias into the synthetic dataset via a subset of in-context examples, ultimately compromising the fairness of downstream classifiers for a targeted protected subgroup. Our findings lead us to define a new vulnerability in LLM-based data generation pipelines that rely on in-context prompts in sensitive domains.
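To make the pipeline described above concrete, the sketch below illustrates one plausible way in-context examples could be serialized into a generation prompt, and how an adversarial contributor might skew the example subset for a targeted subgroup. This is not the authors' implementation; the column names, the skew parameters, and the `call_llm` placeholder are illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's code): few-shot prompting for
# synthetic tabular data generation, with an adversarially skewed subset of
# in-context examples for a targeted protected subgroup.

import random

COLUMNS = ["age", "income", "gender", "label"]  # hypothetical schema


def serialize_row(row):
    """Render one tabular record as a comma-separated line for the prompt."""
    return ", ".join(f"{c}={row[c]}" for c in COLUMNS)


def build_prompt(icl_examples, n_new=10):
    """Build a generation prompt from in-context examples (few-shot ICL)."""
    header = (
        "You are a tabular data generator. Given the example records below, "
        f"generate {n_new} new records with the same columns and format.\n\n"
    )
    shots = "\n".join(serialize_row(r) for r in icl_examples)
    return header + "Examples:\n" + shots + "\n\nNew records:\n"


def biased_subset(pool, k, group_key="gender", group_val="female",
                  positive_rate=0.1):
    """Select k in-context examples while under-representing positive labels
    for the targeted subgroup (the adversarial bias-injection scenario)."""
    target = [r for r in pool if r[group_key] == group_val]
    rest = [r for r in pool if r[group_key] != group_val]
    neg = [r for r in target if r["label"] == 0]
    pos = [r for r in target if r["label"] == 1]
    n_target = k // 2
    n_pos = max(1, int(positive_rate * n_target))
    chosen = random.sample(neg, min(n_target - n_pos, len(neg)))
    chosen += random.sample(pos, min(n_pos, len(pos)))
    chosen += random.sample(rest, min(k - len(chosen), len(rest)))
    random.shuffle(chosen)
    return chosen


def call_llm(prompt):
    """Placeholder for an actual LLM API call; returns generated text."""
    raise NotImplementedError("plug in your LLM client here")
```

In this sketch, the downstream effect would be measured by parsing the generated records, training a classifier on the synthetic data, and comparing subgroup error rates against a model trained on unbiased examples.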
Submission Number: 5