Eliciting User Preferences for Personalized Multi-Objective Decision Making through Comparative Feedback

Published: 21 Sept 2023, Last Modified: 02 Nov 2023. NeurIPS 2023 poster.
Keywords: preference learning, algorithms, linear model, Markov decision processes, learning theory, multi-objective decision making, preference elicitation
TL;DR: Preference learning approach for multi-objective decision making - algorithms and theoretical guarantees
Abstract: In this work, we propose a multi-objective decision-making framework that accommodates different user preferences over objectives, where preferences are learned via policy comparisons. Our model consists of a known Markov decision process with a vector-valued reward function, with each user having an unknown preference vector that expresses the relative importance of each objective. The goal is to efficiently compute a near-optimal policy for a given user. We consider two user feedback models. In the first, a user is provided with two policies and returns their preferred one as feedback. In the second, a user is instead provided with two small weighted sets of representative trajectories and selects the preferred set. In both cases, we give an algorithm that finds a nearly optimal policy for the user using a number of comparison queries that scales quasilinearly in the number of objectives.
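To make the feedback model concrete, here is a minimal sketch of the policy-comparison setting the abstract describes, under the common assumption that each policy is summarized by its expected vector return and the user's unknown preference vector scalarizes it linearly. All names are illustrative; this is not the paper's algorithm or code.

```python
import numpy as np

# Sketch of the policy-comparison feedback model (illustrative, not the
# authors' implementation). Assumption: each policy pi is summarized by
# its expected vector return V(pi) in R^d (one entry per objective), and
# the user's hidden preference vector w in R^d induces the linear utility
# utility(pi) = <w, V(pi)>.

rng = np.random.default_rng(0)

d = 4                         # number of objectives
w_true = rng.random(d)
w_true /= w_true.sum()        # hidden user preference vector (unknown to learner)

def compare(v_a: np.ndarray, v_b: np.ndarray) -> int:
    """User feedback: 0 if policy A is preferred, 1 if policy B is."""
    return 0 if w_true @ v_a >= w_true @ v_b else 1

# Two candidate policies, each summarized by a vector-valued return.
V_a = rng.random(d)
V_b = rng.random(d)
print("user prefers policy", "AB"[compare(V_a, V_b)])
```

An elicitation algorithm would issue such comparison queries adaptively, using each answer to narrow down the region of preference vectors consistent with the user's feedback.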
Supplementary Material: pdf
Submission Number: 8884