Constrained Dueling Bandits for Edge Intelligence

Published: 01 Jan 2025 · Last Modified: 16 May 2025 · IEEE Trans. Netw. Sci. Eng. 2025 · License: CC BY-SA 4.0
Abstract: Bandits are a classical analytic tool for online decision-making under uncertainty, e.g., task assignment in crowdsourcing systems where worker reliability is unknown. In the conventional setup, an agent selects from a set of arms across rounds, balancing the exploitation-exploration tradeoff using quantitative reward feedback. Despite their popularity, practical implementations of bandits raise two concerns: 1) obtaining quantitative rewards is non-trivial, e.g., evaluating workers' completion quality (the reward) requires domain experts to design metrics; 2) the agent's budget must be reconciled with the costs of selecting arms, e.g., the crowdsourcing platform (agent) must pay workers (cost) to complete tasks. To address these concerns, 1) we employ dueling bandits to learn the uncertainties via qualitative pairwise comparisons rather than quantitative rewards, e.g., whether one worker performs better on the assigned task than another; 2) we utilize online control to keep the cost of selecting arms within budget. By integrating online learning and online control, we propose the Constrained Two-Dueling Bandit (CTDB) algorithm. We prove that CTDB achieves an $O(1/V + \sqrt{\log T / T})$ round-averaged regret over a horizon of $T$ rounds while keeping the cost within budget, where $V$ is a constant parameter balancing the tradeoff between regret minimization and constraint satisfaction. Extensive simulations on synthetic and real-world datasets demonstrate that CTDB outperforms baseline algorithms.
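The abstract couples two ingredients: dueling-bandit estimation from qualitative pairwise comparisons, and an online-control mechanism that trades regret against budget violation through the parameter $V$. The Python sketch below illustrates how such a coupling can look in one round; the class name, the UCB-style comparison estimate, and the virtual-queue update are our own illustrative assumptions under a drift-plus-penalty-style controller, not the paper's exact CTDB algorithm.

```python
import math

class ConstrainedDuelingBanditSketch:
    """Illustrative sketch of a budget-constrained dueling bandit.

    Hypothetical design: a virtual queue Q accumulates budget overruns,
    and pair selection maximizes V * (optimistic duel quality) minus the
    queue-weighted cost of the chosen pair.
    """

    def __init__(self, n_arms, costs, budget_per_round, V):
        self.n = n_arms
        self.costs = costs              # cost of dueling each arm (assumed known)
        self.b = budget_per_round       # per-round budget target
        self.V = V                      # regret/constraint tradeoff parameter
        self.Q = 0.0                    # virtual queue tracking budget debt
        self.wins = [[0] * n_arms for _ in range(n_arms)]   # pairwise win counts
        self.plays = [[0] * n_arms for _ in range(n_arms)]  # pairwise duel counts

    def _ucb(self, i, j, t):
        # Optimistic estimate of P(arm i beats arm j); unseen pairs get 1.0.
        if self.plays[i][j] == 0:
            return 1.0
        mean = self.wins[i][j] / self.plays[i][j]
        return mean + math.sqrt(2 * math.log(t + 1) / self.plays[i][j])

    def select_pair(self, t):
        # Drift-plus-penalty: weigh optimistic comparison quality (scaled by V)
        # against the queue-weighted cost of dueling the pair (i, j).
        best, best_score = None, -float("inf")
        for i in range(self.n):
            for j in range(self.n):
                if i == j:
                    continue
                score = (self.V * self._ucb(i, j, t)
                         - self.Q * (self.costs[i] + self.costs[j]))
                if score > best_score:
                    best, best_score = (i, j), score
        return best

    def update(self, i, j, i_won):
        # Record the qualitative comparison outcome, then update the virtual
        # queue: it grows when the round's cost exceeds the budget target.
        self.plays[i][j] += 1
        self.plays[j][i] += 1
        self.wins[i][j] += 1 if i_won else 0
        self.wins[j][i] += 0 if i_won else 1
        self.Q = max(self.Q + self.costs[i] + self.costs[j] - self.b, 0.0)
```

In this sketch, a larger $V$ pushes selection toward the optimistically best duel, while a growing queue $Q$ steers it toward cheaper pairs, mirroring the $O(1/V + \sqrt{\log T / T})$ regret-versus-constraint tradeoff stated in the abstract.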