TL;DR: Uncertainty Quantification for LLM-Based Survey Simulations
Abstract: We investigate the use of large language models (LLMs) to simulate human responses to survey questions, and perform uncertainty quantification to obtain reliable insights from the simulations. Our approach converts imperfect, LLM-simulated responses into confidence sets for population parameters of human responses, addressing the distribution shift between the simulated and real populations. A key challenge lies in determining the right number of simulated responses: too many produce overly narrow confidence sets with poor coverage, while too few yield excessively wide ones. To resolve this trade-off, our method adaptively selects the simulation sample size, ensuring valid average-case coverage guarantees. It is broadly applicable to any LLM, irrespective of its fidelity, and to any procedure for constructing confidence sets. Additionally, the selected sample size quantifies the degree of misalignment between the LLM and the target human population. We illustrate our method on real datasets and LLMs.
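To make the sample-size trade-off concrete, here is a minimal Python sketch, not the paper’s actual procedure: it assumes a binary survey question, uses the LLM-simulated responses only for the point estimate, and widens a standard normal-approximation interval as if only a hypothetical "effective" number of human responses had been observed. The function name, the effective-sample-size values, and the synthetic data are all illustrative assumptions.

```python
import numpy as np
from scipy import stats

def simulated_confidence_interval(sim_responses, n_effective, alpha=0.05):
    """Normal-approximation CI for a population proportion, centered at the
    LLM-simulated estimate but widened as if only `n_effective` human
    responses had been observed. Hypothetical helper for illustration."""
    p_hat = np.mean(sim_responses)
    half_width = stats.norm.ppf(1 - alpha / 2) * np.sqrt(
        p_hat * (1 - p_hat) / n_effective
    )
    return p_hat - half_width, p_hat + half_width

# Shrinking the effective sample size widens the interval; a wide enough
# interval can absorb the LLM's (unknown) bias, trading precision for
# coverage. The 500 simulated binary responses below are synthetic.
rng = np.random.default_rng(0)
sim = rng.binomial(1, 0.62, size=500)
for n in (500, 60, 10):
    lo, hi = simulated_confidence_interval(sim, n)
    print(f"effective n = {n:3d}: CI = ({lo:.3f}, {hi:.3f})")
```

Running this shows why treating all 500 simulated responses as real data is dangerous: the interval at n = 500 is very narrow and misses the true human parameter whenever the LLM is biased, while smaller effective sizes yield wider, safer intervals. The adaptive selection of that effective size is the contribution the abstract describes.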
Lay Summary: Researchers and companies increasingly use large language models (LLMs) to simulate human responses to surveys in economics, social science, and market research. An LLM can generate hundreds of responses in minutes, yet these synthetic responses rarely match real human opinions perfectly. How, then, can we trust such simulations to give a realistic picture of what actual people think?
We tackle this problem by converting imperfect LLM simulations into a reliable confidence interval for human responses. Our method identifies a simulation sample size at which the LLM’s bias is absorbed by the width of the confidence interval. This sample size reveals how many humans the LLM effectively represents, and thereby measures how well the LLM’s simulations align with real human responses. Importantly, our method works for any LLM, no matter how advanced or imperfect.
Our real-data experiments show that, for social opinion surveys, existing LLMs’ responses represent at most 60 randomly selected people from the general U.S. population, while for middle-school math questions, the LLMs can barely mimic the responses of even 10 real students.
Link To Code: https://github.com/yw3453/uq-llm-survey-simulation
Primary Area: General Machine Learning->Evaluation
Keywords: synthetic data, large language models, uncertainty quantification, simulation
Submission Number: 14770