Keywords: Follow-the-Perturbed-Leader, Decoupled exploration and exploitation, Best-of-Both-Worlds
TL;DR: We propose a practically efficient FTPL policy for decoupled MAB that achieves BOBW without convex optimization or resampling.
Abstract: We study the decoupled multi-armed bandit (MAB) problem, where the learner selects one arm for exploration and one arm for exploitation in each round. The loss of the explored arm is observed but not counted, while the loss of the exploited arm is incurred without being observed. We propose a policy within the Follow-the-Perturbed-Leader (FTPL) framework using Pareto perturbations. Our policy achieves (near-)optimal regret regardless of the environment, i.e., Best-of-Both-Worlds (BOBW): constant regret in the stochastic regime, improving upon the optimal bound for standard MABs, and minimax optimal regret in the adversarial regime. Moreover, our policy avoids both the optimization step required by the previous BOBW policy, Decoupled-Tsallis-INF [Rouyer and Seldin, 2020], and the resampling step that is typically necessary in FTPL. Consequently, it achieves a substantial computational improvement, running about $20$ times faster than Decoupled-Tsallis-INF, while demonstrating better empirical performance in both regimes.
Submission Number: 89
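To make the decoupled protocol concrete, below is a minimal, illustrative Python sketch of FTPL with Pareto perturbations in a stochastic decoupled MAB. It is not the paper's policy: the arm count `K`, horizon `T`, Pareto shape `alpha`, and learning-rate schedule are hypothetical choices, both the exploration and exploitation arms are drawn here as independent perturbed leaders, and the exploration probability is approximated by Monte-Carlo resampling purely to form a simple importance-weighted loss estimate (the proposed policy is stated to avoid such resampling).

```python
import numpy as np

rng = np.random.default_rng(0)

K = 5          # number of arms (hypothetical)
T = 10_000     # horizon (hypothetical)
alpha = 2.0    # Pareto shape parameter (illustrative choice)

true_means = rng.uniform(0.2, 0.8, size=K)   # stochastic Bernoulli losses
L_hat = np.zeros(K)                           # cumulative loss estimates

def perturbed_leader(L_hat, eta):
    """FTPL step: pick the arm minimizing the perturbed estimated losses."""
    z = rng.pareto(alpha, size=K) + 1.0       # Pareto(alpha) perturbations
    return int(np.argmin(L_hat - z / eta))

def exploration_prob(L_hat, eta, arm, n_mc=1000):
    """Monte-Carlo estimate of the probability that `arm` is the perturbed
    leader; used here only to build an importance-weighted estimate in this
    toy sketch (the paper's policy avoids this resampling step)."""
    z = rng.pareto(alpha, size=(n_mc, K)) + 1.0
    leaders = np.argmin(L_hat - z / eta, axis=1)
    return max(float((leaders == arm).mean()), 1.0 / n_mc)

total_loss = 0.0
for t in range(1, T + 1):
    eta = 1.0 / np.sqrt(t)                        # illustrative learning rate
    explore_arm = perturbed_leader(L_hat, eta)    # loss observed, not counted
    exploit_arm = perturbed_leader(L_hat, eta)    # loss counted, not observed

    observed_loss = float(rng.random() < true_means[explore_arm])
    total_loss += true_means[exploit_arm]         # incurred but unobserved

    # Importance-weighted update using only the explored arm's observed loss.
    p = exploration_prob(L_hat, eta, explore_arm)
    L_hat[explore_arm] += observed_loss / p

print("average incurred loss:", total_loss / T)
print("best arm mean loss:   ", true_means.min())
```

In this toy run the exploited arm's average loss should approach the best arm's mean loss, illustrating the decoupled feedback model: learning is driven entirely by the explored arm's observations while regret is measured on the exploited arm.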