Supporting High-Stakes Decision Making Through Interactive Preference Elicitation in the Latent Space

ICLR 2026 Conference Submission 17607 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Bayesian optimization, preference elicitation, autoencoder, LLM
Abstract: High-stakes, infrequent consumer decisions, such as housing selection, challenge conventional recommender systems due to sparse interaction signals, heterogeneous multi-criteria objectives, and high-dimensional feature spaces. This work presents an interactive preference elicitation framework that couples preferential Bayesian optimization (PBO) with two complementary components: (i) large language models (LLMs) that interpret natural-language input to produce personalized probabilistic priors over feature utility weights, mitigating the cold-start problem, and (ii) an autoencoder (AE)-based latent representation that reduces the effective dimensionality for sample-efficient exploration. The framework learns a latent utility function from user pairwise comparisons, which are observed and integrated in real time. We evaluate the method on rental real estate datasets from two major European cities. The results show that executing PBO in the AE latent space improves final pairwise ranking accuracy by 12%. For LLM-based prior generation, direct LLM-driven weight specification is outperformed by a static prior, whereas probabilistic weight priors that use the LLM only to rank feature importance achieve 25% higher pairwise accuracy on average than the direct approach.
Primary Area: probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
Submission Number: 17607
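To make the abstract's pipeline concrete, the sketch below illustrates the general idea of eliciting preferences from pairwise comparisons in a low-dimensional latent space. It is a minimal illustration, not the authors' method: the latent codes are simulated in place of a trained autoencoder encoder, the utility is modeled as linear in the latent features with a Laplace-approximated Bradley-Terry posterior rather than the paper's GP-based PBO surrogate, Thompson sampling stands in for the acquisition rule, and the isotropic Gaussian prior is where an LLM-derived feature-importance prior would plug in. All names and numbers (`n_items`, `d`, the prior, the feedback simulator) are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(0)

# Hypothetical setup: assume listings have already been encoded by a trained
# autoencoder into a d-dimensional latent space (simulated here at random).
n_items, d = 200, 5
Z = rng.normal(size=(n_items, d))   # latent codes of rental listings
w_true = rng.normal(size=d)         # hidden "true" utility weights (for simulation only)
utility_true = Z @ w_true


def laplace_posterior(pairs, prior_mean, prior_prec):
    """MAP estimate and Laplace covariance for a Bradley-Terry utility model
    u(z) = w @ z, given pairwise outcomes (winner_index, loser_index)."""
    def neg_log_post(w):
        nlp = 0.5 * (w - prior_mean) @ prior_prec @ (w - prior_mean)
        for i, j in pairs:
            diff = (Z[i] - Z[j]) @ w
            nlp += np.log1p(np.exp(-diff))   # -log sigmoid(diff)
        return nlp

    w_map = minimize(neg_log_post, prior_mean, method="BFGS").x
    # Hessian of the negative log posterior at the MAP point.
    H = prior_prec.copy()
    for i, j in pairs:
        x = Z[i] - Z[j]
        p = expit(x @ w_map)
        H += p * (1 - p) * np.outer(x, x)
    return w_map, np.linalg.inv(H)


# Prior over latent utility weights. An LLM-derived prior would set the mean
# and per-dimension variances from ranked feature importance; an isotropic
# Gaussian is assumed here for simplicity.
prior_mean, prior_prec = np.zeros(d), np.eye(d)

pairs = []
for step in range(20):
    w_map, cov = laplace_posterior(pairs, prior_mean, prior_prec)
    # Thompson sampling as a simple acquisition rule: draw utility weights
    # from the posterior and show the two top-scoring listings to the user.
    w_sample = rng.multivariate_normal(w_map, cov)
    i, j = np.argsort(Z @ w_sample)[-2:]
    # Simulated user feedback from the hidden true utility.
    winner, loser = (i, j) if utility_true[i] > utility_true[j] else (j, i)
    pairs.append((winner, loser))

w_map, _ = laplace_posterior(pairs, prior_mean, prior_prec)
print("correlation of learned vs. true utilities:",
      np.corrcoef(Z @ w_map, utility_true)[0, 1])
```

Because the comparisons are collected in the latent space rather than over the raw listing features, each query constrains a much smaller parameter vector, which is the sample-efficiency argument the abstract makes for running PBO in the AE latent space.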