Keywords: Multi-armed bandits, Lipschitz condition, Clustering, Adversarial, Online optimization
TL;DR: A practical hierarchical bandit algorithm for nonstochastic multi-armed bandits with oblivious Lipschitz adversaries in metric spaces; it achieves improved regret under favorable conditions and superior empirical performance.
Abstract: Motivated by dynamic parameter optimization over finite but large action (configuration) spaces, this work studies the nonstochastic multi-armed bandit (MAB) problem in metric action spaces with oblivious Lipschitz adversaries.
We propose ABoB, a hierarchical Adversarial Bandit over Bandits algorithm that clusters similar configurations into "virtual arms". It then runs state-of-the-art "flat" MAB algorithms at each level of the hierarchy to exploit local structure and adapt to changing environments.
We prove that in the worst case such clustering cannot hurt much: ABoB guarantees a standard worst-case regret bound of $\mathcal{O}(k^{\frac{1}{2}}T^{\frac{1}{2}})$, where $T$ is the number of rounds and $k$ is the number of arms, matching the traditional flat approach.
However, under favorable conditions on the algorithm, the clusters, and certain Lipschitz conditions, the regret bound improves to $\mathcal{O}(k^{\frac{1}{4}}T^{\frac{1}{2}})$.
Simulations and experiments on a real storage system demonstrate that ABoB can be made practical using standard algorithms such as EXP3 and Tsallis-INF. ABoB achieves lower regret and faster convergence than the flat method, with up to a 50\% improvement, in previously studied setups, both nonstochastic and stochastic, as well as in our settings.
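To make the two-level design concrete, here is a minimal sketch of a bandit-over-bandits instantiated with EXP3 at both levels, as the abstract suggests (the paper also mentions Tsallis-INF). The cluster assignment, the learning rate `gamma`, and the toy Lipschitz reward are illustrative assumptions, not the paper's exact construction.

```python
import math
import random

class EXP3:
    """Standard EXP3 over n arms with exploration rate gamma (illustrative)."""
    def __init__(self, n, gamma=0.1):
        self.n = n
        self.gamma = gamma
        self.weights = [1.0] * n

    def probabilities(self):
        total = sum(self.weights)
        return [(1 - self.gamma) * w / total + self.gamma / self.n
                for w in self.weights]

    def draw(self):
        probs = self.probabilities()
        arm = random.choices(range(self.n), weights=probs)[0]
        return arm, probs[arm]

    def update(self, arm, reward, prob):
        # Importance-weighted reward estimate, then exponential weight update.
        x_hat = reward / prob
        self.weights[arm] *= math.exp(self.gamma * x_hat / self.n)

class ABoBSketch:
    """Bandit over bandits: a top-level EXP3 over clusters ("virtual arms")
    and one inner EXP3 per cluster over that cluster's configurations."""
    def __init__(self, clusters, gamma=0.1):
        # clusters: a partition of the configuration indices into lists.
        self.clusters = clusters
        self.top = EXP3(len(clusters), gamma)
        self.inner = [EXP3(len(c), gamma) for c in clusters]

    def select(self):
        c, p_c = self.top.draw()
        a, p_a = self.inner[c].draw()
        return c, a, p_c, p_a

    def update(self, c, a, p_c, p_a, reward):
        # Both levels observe the same bandit feedback for the played arm.
        self.top.update(c, reward, p_c)
        self.inner[c].update(a, reward, p_a)

# Toy run: 16 arms in 4 clusters, with a Lipschitz-like reward in [0, 1]
# that peaks at arm 13 (purely synthetic, for illustration only).
clusters = [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11], [12, 13, 14, 15]]
algo = ABoBSketch(clusters)
for t in range(10000):
    c, a, p_c, p_a = algo.select()
    arm = clusters[c][a]
    reward = min(1.0, max(0.0, 1.0 - abs(arm - 13) / 15.0
                          + random.uniform(-0.05, 0.05)))
    algo.update(c, a, p_c, p_a, reward)
```

In this sketch both levels learn from the same feedback, so when nearby configurations have similar rewards (the Lipschitz condition), the top-level bandit quickly concentrates on the best cluster while each inner bandit only has to resolve a small local subproblem.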
Primary Area: optimization
Submission Number: 11390