Language Model Fine-Tuning on Scaled Survey Data for Predicting Distributions of Public Opinions

ACL ARR 2025 February Submission 2879 Authors

15 Feb 2025 (modified: 09 May 2025) · ACL ARR 2025 February Submission · CC BY 4.0
Abstract: Large language models (LLMs) present novel opportunities in public opinion research by predicting survey responses before fielding, during the early stages of survey design. Prior methods steer LLMs by describing subpopulations in the input prompt, yet such prompt-engineering approaches have struggled to faithfully predict the distribution of survey responses from human subjects. In this work, we propose directly fine-tuning LLMs to predict response distributions, leveraging the unique structural characteristics of survey data. To enable fine-tuning, we curate SubPOP, a significantly scaled dataset of 3,362 questions and 70K subpopulation-response pairs drawn from well-established public opinion surveys. We show that fine-tuning on SubPOP greatly improves the match between LLM predictions and human responses across various subpopulations, reducing the discrepancy between predicted and observed distributions over answer options by up to 46% relative to baselines, and achieves strong generalization to out-of-distribution data. Our findings highlight the potential of survey-based fine-tuning to improve predictions of real-world population opinions and thereby enable more efficient survey design.
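The abstract does not specify the fine-tuning objective. As a rough illustration only, one plausible way to train an LLM to match a survey response distribution is to minimize the KL divergence between the model's predicted distribution over answer options and the observed human distribution for a given subpopulation; this is a minimal sketch under that assumption, and the function name and example tensors below are hypothetical, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def distribution_matching_loss(option_logits: torch.Tensor,
                               human_dist: torch.Tensor) -> torch.Tensor:
    """KL divergence between the model's predicted distribution over
    answer options and the observed human response distribution."""
    log_pred = F.log_softmax(option_logits, dim=-1)
    # F.kl_div expects log-probabilities as input and probabilities as target
    return F.kl_div(log_pred, human_dist, reduction="batchmean")

# Hypothetical example: one 4-option question answered by one subpopulation.
logits = torch.tensor([[2.0, 0.5, -1.0, 0.3]])    # model scores per answer option
human = torch.tensor([[0.55, 0.20, 0.05, 0.20]])  # observed response shares
loss = distribution_matching_loss(logits, human)
print(f"KL loss: {loss.item():.4f}")
```

In such a setup, the loss would be backpropagated through the LLM's logits for the answer-option tokens, so the model learns to output a probability distribution rather than a single most-likely answer.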
Paper Type: Long
Research Area: Computational Social Science and Cultural Analytics
Research Area Keywords: Computational Social Science and Cultural Analytics, NLP Applications
Contribution Types: NLP engineering experiment, Data resources
Languages Studied: English
Submission Number: 2879