Fusing Reward and Dueling Feedback in Stochastic Bandits

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY-NC-ND 4.0
TL;DR: Effective algorithms fusing reward and dueling feedback in stochastic bandits.
Abstract: This paper investigates the fusion of absolute (reward) and relative (dueling) feedback in stochastic bandits, where both feedback types are gathered in each decision round. We derive a regret lower bound demonstrating that an efficient algorithm need incur only the smaller of the reward-based and dueling-based regret for each individual arm. We propose two fusion approaches: (1) a simple elimination fusion algorithm that leverages both feedback types to explore all arms and unifies the collected information through a shared candidate arm set, and (2) a decomposition fusion algorithm that selects the more effective feedback type to explore the corresponding arms and, in each round, randomly assigns one feedback type to exploration and the other to exploitation. The elimination fusion algorithm incurs a suboptimal multiplicative factor in the number of arms in its regret, owing to the intrinsic suboptimality of dueling elimination. In contrast, the decomposition fusion algorithm achieves regret matching the lower bound up to a constant under a common assumption. Extensive experiments confirm the efficacy of our algorithms and theoretical results.
Lay Summary: Online recommendation platforms—think movie ratings on IMDb or hotel reviews on TripAdvisor—gather two kinds of user feedback: **absolute feedback**, where you assign a score to a single item (“I give this movie 4 stars”), and **relative feedback**, where you compare two items (“I prefer Movie A over Movie B”).

Rather than treating these two feedback channels separately, our work asks: *Can we combine them to make better future recommendations?* This question is also highly relevant when training large language models, where multiple signal types can guide learning.

We propose two simple yet powerful ways to fuse absolute and relative feedback:

1. **ElimFusion** removes any recommendation that receives a negative judgment in either feedback mode. If a movie scores poorly or loses a head-to-head comparison, it is eliminated from consideration—letting us focus only on items with consistently positive signals.
2. **DecoFusion** splits potential recommendations into two groups: one optimized for absolute scores and another for pairwise comparisons. By tailoring how we process each group, we capture the strengths of both feedback types without forcing them into a single metric.

Across a range of experiments, both ElimFusion and DecoFusion outperform methods that rely on just one kind of feedback. Our results show that collecting and intelligently combining absolute and relative preferences can significantly boost recommendation quality. Beyond recommendation systems, these fusion strategies open up new opportunities for any machine-learning task that benefits from multiple forms of human or automated feedback.
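To make the elimination idea concrete, here is a minimal, self-contained sketch of an elimination-style fusion loop in the spirit of ElimFusion: a shared candidate set shrinks whenever *either* channel—reward confidence intervals or pairwise win-rate intervals—confidently rules an arm out. The Bernoulli reward model, the linear-link duel probabilities, the pairing scheme, and the Hoeffding-style thresholds are all illustrative assumptions of this sketch, not the paper's exact algorithm.

```python
import math
import random

def elim_fusion(reward_means, horizon, delta=0.05, seed=0):
    """Illustrative elimination-fusion sketch (NOT the paper's exact rule).

    Keeps a shared candidate set; an arm is dropped as soon as either
    (a) its reward UCB falls below another candidate's reward LCB, or
    (b) its empirical win rate against some candidate is confidently
        below 1/2 in the dueling channel.
    """
    rng = random.Random(seed)
    k = len(reward_means)
    cand = set(range(k))
    pulls = [0] * k            # reward observations per arm
    rew_sum = [0.0] * k        # cumulative rewards per arm
    wins = [[0] * k for _ in range(k)]   # wins[i][j]: times i beat j
    duels = [[0] * k for _ in range(k)]  # duels[i][j]: comparisons of i vs j
    t = 0
    while t < horizon and len(cand) > 1:
        arms = sorted(cand)
        # pair up candidates; each pair yields both feedback types
        for idx in range(0, len(arms) - 1, 2):
            i, j = arms[idx], arms[idx + 1]
            for a in (i, j):   # absolute (reward) feedback, Bernoulli (assumed)
                pulls[a] += 1
                rew_sum[a] += 1.0 if rng.random() < reward_means[a] else 0.0
            # relative (dueling) feedback via an assumed linear link
            p_ij = 0.5 + (reward_means[i] - reward_means[j]) / 2
            winner, loser = (i, j) if rng.random() < p_ij else (j, i)
            wins[winner][loser] += 1
            duels[i][j] += 1
            duels[j][i] += 1
            t += 1
        if len(arms) % 2:      # odd count: leftover arm gets a solo reward pull
            a = arms[-1]
            pulls[a] += 1
            rew_sum[a] += 1.0 if rng.random() < reward_means[a] else 0.0
            t += 1

        def rad(n):            # Hoeffding-style confidence radius (assumed form)
            if n == 0:
                return float("inf")
            return math.sqrt(math.log(2 * k * max(t, 2) / delta) / (2 * n))

        for i in sorted(cand):             # elimination via either channel
            for j in sorted(cand):
                if i == j:
                    continue
                ucb_i = rew_sum[i] / pulls[i] + rad(pulls[i]) if pulls[i] else float("inf")
                lcb_j = rew_sum[j] / pulls[j] - rad(pulls[j]) if pulls[j] else -float("inf")
                if ucb_i < lcb_j:          # reward-based elimination
                    cand.discard(i)
                    break
                n = duels[i][j]
                if n and wins[i][j] / n + rad(n) < 0.5:  # dueling-based elimination
                    cand.discard(i)
                    break
    return cand
```

With well-separated arm means and a long enough horizon, the surviving candidate set collapses to the best arm, with eliminations triggered by whichever feedback channel becomes conclusive first.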
Primary Area: General Machine Learning->Online Learning, Active Learning and Bandits
Keywords: Relative Feedback, Stochastic Bandits, Dueling Bandits
Submission Number: 3929