Multi-Fidelity Best-Arm Identification

Published: 31 Oct 2022, 18:00 · Last Modified: 10 Oct 2022, 15:05 · NeurIPS 2022 Accept
Keywords: best-arm identification, fixed-confidence, multi-fidelity, multi-armed bandit
TL;DR: We study the multi-fidelity variant of best-arm identification, where observing the outcome of a given arm is expensive, but multiple biased approximations (i.e., fidelities) are available at a cheaper cost.
Abstract: In several real-world applications, a learner has access to multiple environment simulators, each with a different precision (e.g., simulation accuracy) and cost (e.g., computational time). In such a scenario, the learner faces a trade-off between selecting expensive accurate simulators and preferring cheap imprecise ones. We formalize this setting as a multi-fidelity variant of the stochastic best-arm identification problem, where querying the original arm is expensive, but multiple biased approximations (i.e., fidelities) are available at lower costs. The learner's goal, in this setting, is to sequentially choose which simulator to query in order to minimize the total cost, while guaranteeing to identify the optimal arm with high probability. We first derive a lower bound on the identification cost, assuming that the maximum bias of each fidelity is known to the learner. Then, we propose a novel algorithm, Iterative Imprecise Successive Elimination (IISE), which provably reduces the total cost compared to algorithms that ignore the multi-fidelity structure, and whose cost-complexity upper bound mimics the structure of the lower bound. Furthermore, we show that the cost complexity of IISE can be further reduced when the agent has access to a more fine-grained knowledge of the error introduced by the approximators. Finally, we numerically validate IISE, showing the benefits of our method in simulated domains.
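To make the setting concrete, the following is a minimal, self-contained sketch of a multi-fidelity successive-elimination loop. It is an illustration of the problem structure the abstract describes, not the authors' IISE: each fidelity comes with a (cost, max-bias) pair, confidence intervals are widened by the known bias bound, and the learner escalates to a more expensive fidelity only once the current one can no longer separate the surviving arms. The Hoeffding-style radius, the sub-Gaussian noise parameter `sigma`, and the toy instance are all assumptions made for the example.

```python
import math
import random

def mf_successive_elimination(pull, n_arms, fidelities, delta=0.05,
                              sigma=0.05, batch=200, max_rounds=50):
    """Illustrative multi-fidelity successive elimination (a sketch, NOT the
    authors' IISE).  `pull(arm, m)` draws one sample from fidelity m of `arm`;
    `fidelities` lists (cost, max_bias) pairs, cheapest first, with the last
    entry being the exact simulator (bias 0).  Noise is assumed to be
    sigma-subgaussian."""
    active = list(range(n_arms))
    total_cost = 0.0
    for m, (cost, bias) in enumerate(fidelities):
        counts = {a: 0 for a in active}    # statistics restart at each fidelity:
        sums = {a: 0.0 for a in active}    # different fidelities carry different biases
        for _ in range(max_rounds):
            if len(active) == 1:
                return active[0], total_cost
            for a in active:
                for _ in range(batch):
                    sums[a] += pull(a, m)
                    counts[a] += 1
                    total_cost += cost
            means = {a: sums[a] / counts[a] for a in active}
            # anytime Hoeffding-style radius; the bias bound widens every interval
            rad = {a: sigma * math.sqrt(2 * math.log(4 * n_arms * counts[a] ** 2 / delta)
                                        / counts[a])
                   for a in active}
            best_lcb = max(means[a] - rad[a] - bias for a in active)
            survivors = [a for a in active if means[a] + rad[a] + bias >= best_lcb]
            if len(survivors) == len(active):
                break  # this fidelity can no longer separate the arms: escalate
            active = survivors
    return max(active, key=lambda a: sums[a] / counts[a]), total_cost

# Hypothetical toy instance: three arms; the cheap simulator (cost 1) is biased
# by up to 0.05, the exact one (cost 10) is unbiased.
rng = random.Random(0)
true_means = [0.5, 0.3, 0.1]
low_fid_bias = [-0.05, +0.05, +0.05]  # adversarial bias: shrinks the top gap

def pull(arm, m):
    mean = true_means[arm] + (low_fid_bias[arm] if m == 0 else 0.0)
    return rng.gauss(mean, 0.05)

best, cost = mf_successive_elimination(pull, 3, [(1.0, 0.05), (10.0, 0.0)])
print(best)
```

In this toy run, the clearly suboptimal arm is eliminated using only the cheap simulator, while the two close arms survive (their biased intervals overlap) and are only separated after escalating to the exact, expensive fidelity; this is the cost saving the multi-fidelity structure buys over querying the exact simulator for every arm from the start.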
Supplementary Material: zip