LLM-Powered Preference Elicitation in Combinatorial Assignment

20 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: mechanism design, preference elicitation, combinatorial assignment, course allocation, LLM proxies
TL;DR: LLMs serve as one-shot proxies for student preferences, reducing elicitation effort while improving course-allocation outcomes.
Abstract: We study the potential of large language models (LLMs) as proxies for humans to simplify preference elicitation (PE) in combinatorial assignment, where the bundle space grows exponentially with the number of items, making full elicitation infeasible beyond small domains. Traditional elicitation methods sacrifice expressiveness and require agents to translate their preferences into rigid, unnatural formats, leading to under-reporting and welfare loss. Iterative, machine-learning-based elicitation schemes relax these constraints but impose the cognitive burden of repeated, highly structured interaction. LLMs offer a one-shot alternative with reduced human effort. Using the well-studied course-allocation problem as a testbed, we propose a framework for LLM proxies that can work in tandem with state-of-the-art ML-powered preference elicitation schemes. We experimentally evaluate the efficiency of LLM proxies against human queries and investigate the model capabilities required for success. We find that our framework improves allocative efficiency by up to 20%, and these results are robust across different LLMs and to differences in the quality and accuracy of reporting.
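To make the proxy idea concrete, the following is a minimal sketch (not the paper's actual implementation) of one-shot LLM proxying: the student writes a single free-text preference report, and the proxy answers the pairwise bundle-comparison queries that an ML-based elicitation scheme would otherwise pose to the human. Here `mock_llm`, `build_query`, and the keyword-matching heuristic are all hypothetical stand-ins introduced for illustration; a real system would call an actual LLM.

```python
def build_query(report: str, bundle_a: tuple, bundle_b: tuple) -> str:
    """Render a pairwise bundle-comparison query for the LLM proxy."""
    return (f"Student preferences: {report}\n"
            f"Which bundle does the student prefer?\n"
            f"A: {', '.join(bundle_a)}\n"
            f"B: {', '.join(bundle_b)}\n"
            "Answer with A or B.")

def mock_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM call: prefers the bundle
    that overlaps more with words from the student's report."""
    report, rest = prompt.split("\nWhich bundle", 1)
    keywords = set(report.lower().split())
    option_a = rest.split("A: ")[1].split("\n")[0].lower()
    option_b = rest.split("B: ")[1].split("\n")[0].lower()
    score = lambda text: sum(word in text for word in keywords)
    return "A" if score(option_a) >= score(option_b) else "B"

def proxy_prefers(report, bundle_a, bundle_b, llm=mock_llm):
    """Return whichever bundle the proxy judges the student prefers."""
    answer = llm(build_query(report, bundle_a, bundle_b))
    return bundle_a if answer == "A" else bundle_b

# One-shot report, then the elicitation scheme's queries go to the proxy.
report = "I want machine learning and statistics courses"
bundles = [("Machine Learning", "Art History"),
           ("Statistics", "Machine Learning"),
           ("Art History", "Music")]
best = bundles[0]
for candidate in bundles[1:]:
    best = proxy_prefers(report, best, candidate)
```

The key design point the sketch illustrates is where the human effort sits: the student interacts once (the free-text report), while the iterative, structured queries are absorbed by the proxy.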
Supplementary Material: zip
Primary Area: optimization
Submission Number: 22918