Who to Ask and What to Ask: Adaptive Multi-Turn Group Elicitation with LLMs

Published: 02 Mar 2026, Last Modified: 10 Mar 2026 · ICLR 2026 Workshop AIMS · CC BY 4.0
Keywords: Adaptive Survey Design, Group Adaptive Elicitation, Uncertainty Quantification, Large Language Models
TL;DR: We propose a population-aware adaptive elicitation framework that jointly selects questions and respondents to improve group-level prediction under limited survey budgets.
Abstract: Eliciting information to reduce uncertainty about latent group-level properties is a central problem in collective assessment, preference modeling, and opinion aggregation, and is especially important in survey-based studies. This process can be viewed as a budgeted information elicitation mechanism, where limited querying resources must be strategically allocated. While natural language interactions provide a flexible interface, existing methods typically rely on fixed questionnaires and static respondent sets, and do not adapt to partial or missing responses across rounds. To address this gap, we study adaptive information elicitation through multi-turn interactions between a large language model and a group of individuals, where both queries and respondents are adaptively selected to infer latent group properties. We propose a theoretically grounded framework that, at each round, jointly selects a query and a subset of respondents based on previously observed responses to efficiently reduce uncertainty about a target latent quantity (e.g., group-level political inclination). Motivated by practical survey constraints, such as limited questions and costly participation, our strategy maximizes information gain under a fixed budget. To handle missing and incomplete responses, we combine graph neural networks for aggregating/imputing partial group information with an information-theoretic criterion that guides per-round selection. Across three real-world opinion datasets, we achieve consistent improvements in population-level response prediction under constrained budgets, including over a 12% relative gain on CES at a 10% respondent budget.
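The per-round selection described in the abstract — scoring candidate (query, respondent) pairs by expected uncertainty reduction about a latent group property and picking the best one that fits the remaining budget — can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes a binary latent variable, binary responses, and hypothetical per-question response models `question_models` mapping each latent value to a response probability; the GNN-based imputation component is omitted.

```python
import math

def entropy(p):
    """Shannon entropy (in bits) of a Bernoulli(p) variable."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def expected_info_gain(prior, likelihoods):
    """Mutual information I(Z; R) between a binary latent Z ~ Bernoulli(prior)
    and a binary response R with P(R=1 | Z=z) = likelihoods[z]."""
    p_r1 = (1 - prior) * likelihoods[0] + prior * likelihoods[1]
    h_prior = entropy(prior)
    expected_posterior_entropy = 0.0
    for r, p_r in ((1, p_r1), (0, 1.0 - p_r1)):
        if p_r == 0.0:
            continue
        # Bayes update of the latent given the observed response value r
        like_z1 = likelihoods[1] if r == 1 else 1.0 - likelihoods[1]
        posterior = prior * like_z1 / p_r
        expected_posterior_entropy += p_r * entropy(posterior)
    return h_prior - expected_posterior_entropy

def select_round(prior, question_models, respondent_costs, budget):
    """Greedily pick the (question, respondent) pair with the highest
    information gain per unit cost that fits within the remaining budget."""
    best, best_score = None, -1.0
    for q, likelihoods in question_models.items():
        gain = expected_info_gain(prior, likelihoods)
        for i, cost in enumerate(respondent_costs):
            if cost > budget:
                continue
            score = gain / cost
            if score > best_score:
                best, best_score = (q, i), score
    return best
```

An informative question (whose response distribution differs sharply across latent values) is preferred over an uninformative one, and respondents whose participation cost exceeds the budget are skipped; the paper's actual criterion operates over subsets of respondents and richer response spaces.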
Track: Long Paper
Email Sharing: We authorize the sharing of all author emails with Program Chairs.
Data Release: We authorize the release of our submission and author names to the public in the event of acceptance.
Submission Number: 61