Combinatorial Reinforcement Learning with Preference Feedback

Published: 01 May 2025, Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY-NC-ND 4.0
TL;DR: We study combinatorial reinforcement learning with multinomial logistic (MNL) preference feedback and propose a computationally efficient algorithm that establishes the first regret bound in this framework.
Abstract: In this paper, we consider combinatorial reinforcement learning with preference feedback, where a learning agent sequentially offers an action—an assortment of multiple items—to a user, whose preference feedback follows a multinomial logistic (MNL) model. This framework allows us to model real-world scenarios, particularly those involving long-term user engagement, such as recommender systems and online advertising. However, it poses two main challenges: (1) the value of each item is unknown, unlike in traditional MNL bandits, which address only single-step preference feedback, and (2) it is difficult to ensure optimism while maintaining tractable assortment selection over the combinatorial action space with unknown values. We assume a contextual MNL preference model in which the mean utilities are linear and the value of each item is approximated by a general function. We propose an algorithm, MNL-VQL, that addresses these challenges and is both computationally and statistically efficient. As a special case, for linear MDPs (with MNL preference feedback), we establish the first regret lower bound in this framework and show that MNL-VQL achieves near-optimal regret. To the best of our knowledge, this is the first work to provide statistical guarantees in combinatorial RL with preference feedback.
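For readers less familiar with the MNL preference model referenced in the abstract, the sketch below illustrates how choice probabilities over an offered assortment are computed under linear mean utilities. This is only a minimal illustration of the standard contextual MNL choice model with an outside (no-choice) option whose utility is normalized to zero; the variable names, dimensions, and example values are ours and do not reflect the specifics of MNL-VQL.

```python
import numpy as np

def mnl_choice_probabilities(features, theta):
    """Choice probabilities under a contextual MNL model with linear mean utilities.

    features : (K, d) array, one feature vector per item in the offered assortment
    theta    : (d,) utility parameter (unknown to the learner in the bandit/RL setting)

    Returns a length-(K + 1) vector: the probability of choosing each of the K
    offered items, with the last entry being the outside (no-choice) option,
    whose utility is normalized to 0 in the standard MNL formulation.
    """
    utilities = features @ theta              # linear mean utilities x_i^T theta
    # Numerically stable softmax: subtract the max over item utilities and 0 (outside option)
    m = max(utilities.max(), 0.0)
    exp_u = np.exp(utilities - m)
    exp_outside = np.exp(0.0 - m)
    denom = exp_outside + exp_u.sum()
    return np.append(exp_u / denom, exp_outside / denom)

# Example (hypothetical values): a user is offered an assortment of 3 items with 2-d features.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(3, 2))      # item features
    theta = np.array([0.5, -0.2])    # true utility parameter, unknown to the learner
    probs = mnl_choice_probabilities(X, theta)
    print(probs, probs.sum())        # probabilities over 3 items + no-choice, sums to 1
```

In the combinatorial RL setting studied in the paper, the agent selects which assortment (subset of items) to offer at each step, observes a single choice drawn from probabilities of this form, and must trade off immediate preference satisfaction against learning item values for long-term return.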
Lay Summary: Imagine a movie app that shows you a set of films—you pick one, and that’s all the app knows. To keep you engaged in the long run, it needs to learn your preferences from such limited feedback and choose better sets of movies over time. We study this problem using a realistic model of how people make choices and introduce an algorithm called MNL-VQL. It learns efficiently from repeated user interactions, selecting sets of items that not only match current preferences but also help the system learn faster for future decisions. Our method is the first to offer both practical efficiency and strong theoretical guarantees in reinforcement learning with this kind of feedback, helping AI systems make smarter long-term decisions.
Primary Area: Theory->Reinforcement Learning and Planning
Keywords: Reinforcement Learning, Multinomial Logistic, Function Approximation
Submission Number: 10720