Reward Maximization for Pure Exploration: Minimax Optimal Good Arm Identification for Nonparametric Multi-Armed Bandits

Published: 22 Jan 2025 · Last Modified: 11 Mar 2025 · AISTATS 2025 Poster · CC BY 4.0
TL;DR: We show that for the good-arm identification problem, regret-optimal sampling schemes achieve optimal stopping times when paired with our novel anytime valid sequential testing methods.
Abstract:

In multi-armed bandits, reward maximization and pure exploration are often at odds with each other. The former focuses on exploiting arms with the highest means, while the latter may require constant exploration across all arms. In this work, we focus on good arm identification (GAI), a pure exploration objective that aims to label arms with means above a threshold as quickly as possible. We show that GAI can be efficiently solved by combining a reward-maximizing sampling algorithm with a novel nonparametric anytime-valid sequential test for labeling arm means.
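To make the test concrete, here is a minimal sketch of one standard construction in the anytime-valid family the abstract refers to: a betting-style e-process for H0: E[X] ≤ threshold with rewards bounded in [0, 1]. The function name `sequential_test`, the fixed bet `lam`, and the Bernoulli toy stream are illustrative assumptions; the paper's minimax e-power-optimal test uses its own betting scheme, which this sketch does not reproduce.

```python
import numpy as np

def sequential_test(stream, threshold, delta, lam, max_t=100_000):
    """Betting e-process test of H0: E[X] <= threshold, for X in [0, 1].

    The wealth W_t = prod_i (1 + lam * (X_i - threshold)) is a nonnegative
    supermartingale under H0 whenever lam is in [0, 1/threshold), so rejecting
    once W_t >= 1/delta controls the type-I error at delta uniformly over all
    stopping times (Ville's inequality). Returns the rejection time, or None.
    """
    wealth = 1.0
    for t in range(1, max_t + 1):
        x = next(stream)
        wealth *= 1.0 + lam * (x - threshold)
        if wealth >= 1.0 / delta:
            return t  # label the mean as above the threshold
    return None

# Toy usage (hypothetical): a Bernoulli(0.7) stream tested against threshold 0.5.
rng = np.random.default_rng(0)
stream = iter(lambda: float(rng.random() < 0.7), None)
print(sequential_test(stream, threshold=0.5, delta=0.05, lam=1.0))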

We begin by presenting the theoretical guarantees of our proposed sequential test. Under nonparametric assumptions, our test ensures strict error control and asymptotically achieves the minimax optimal e-power, a notion of power for anytime-valid tests. Building on this, we propose an algorithm for GAI by pairing regret-minimizing sampling schemes with our sequential test as a stopping criterion. We show that this approach achieves minimax optimal stopping times for labeling arms with means above a threshold, under an error probability constraint δ. Our empirical results validate our approach beyond the minimax setting, reducing the expected number of samples required for all stopping times by at least 50% across both synthetic and real-world settings.
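The sketch below illustrates how a regret-minimizing sampler can be paired with per-arm e-processes as stopping criteria, as the abstract describes. UCB1 is used here as one example of a regret-minimizing scheme, and the function `gai_ucb`, the fixed bet `lam`, and the removal of labeled arms from the sampling pool are assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def gai_ucb(arms, threshold, delta, lam, max_pulls):
    """GAI sketch: UCB1 sampling plus a per-arm e-process stopping rule.

    arms: list of callables, each returning a reward in [0, 1].
    An arm is labeled good once its e-process wealth crosses 1/delta; by
    Ville's inequality each label is wrong with probability at most delta.
    Labeled arms are removed from the active sampling pool.
    """
    k = len(arms)
    counts, sums = np.zeros(k), np.zeros(k)
    wealth = np.ones(k)
    active, good = list(range(k)), []
    for t in range(1, max_pulls + 1):
        if not active:
            break
        # Pull any active arm with no samples yet, else the UCB1 maximizer.
        unseen = [a for a in active if counts[a] == 0]
        if unseen:
            a = unseen[0]
        else:
            idx = np.array(active)
            means = sums[idx] / counts[idx]
            bonus = np.sqrt(2.0 * np.log(t) / counts[idx])
            a = int(idx[np.argmax(means + bonus)])
        x = arms[a]()
        counts[a] += 1
        sums[a] += x
        # Betting e-process update for H0: mean(a) <= threshold.
        wealth[a] *= 1.0 + lam * (x - threshold)
        if wealth[a] >= 1.0 / delta:
            good.append(a)
            active.remove(a)
    return good

# Toy usage (hypothetical): three Bernoulli arms against threshold 0.5.
arms = [lambda p=p: float(rng.random() < p) for p in (0.3, 0.6, 0.8)]
print(gai_ucb(arms, threshold=0.5, delta=0.05, lam=1.0, max_pulls=20_000))
```

The design intuition matches the abstract: the reward-maximizing index concentrates pulls on arms with high means, which are exactly the arms whose e-processes grow fastest, so labeling happens quickly without a separate exploration schedule.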

Submission Number: 758