Beyond the Best: Distribution Functional Estimation in Infinite-Armed Bandits

Published: 31 Oct 2022, 18:00; Last Modified: 19 Jan 2023, 06:17. NeurIPS 2022 Accept.
Keywords: Multi-armed bandits, information-theoretic lower bounds, algorithm design, functional estimation
TL;DR: We construct unified algorithms for offline and online estimation of a class of distribution functionals in the infinite-armed bandit setting, and provide matching upper and lower bounds.
Abstract: In the infinite-armed bandit problem, each arm's average reward is sampled from an unknown distribution, and each arm can be sampled further to obtain noisy estimates of the average reward of that arm. Prior work focuses on the best arm, i.e. estimating the maximum of the average reward distribution. We consider a general class of distribution functionals beyond the maximum and obtain optimal sample complexities in both offline and online settings. We show that online estimation, where the learner can sequentially choose whether to sample a new or existing arm, offers no advantage over the offline setting for estimating the mean functional, but significantly reduces the sample complexity for other functionals such as the median, maximum, and trimmed mean. We propose unified meta algorithms for the online and offline settings and derive matching lower bounds using different Wasserstein distances. For the special case of median estimation, we identify a curious thresholding phenomenon on the indistinguishability between Gaussian convolutions with respect to the noise level, which may be of independent interest.
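The setting described in the abstract can be illustrated with a minimal simulation sketch. This is not the paper's algorithm, only the offline scheme in its simplest form: draw a fixed number of arms, pull each a fixed number of times, and plug the empirical arm means into the functional of interest. The uniform arm-mean distribution, Gaussian noise, and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_arm_means(n):
    # Illustrative choice: arm means drawn i.i.d. from Uniform[0, 1];
    # in the paper this distribution is unknown to the learner.
    return rng.uniform(0.0, 1.0, size=n)

def pull(mu, m, sigma=0.5):
    # m noisy pulls of an arm with mean mu (Gaussian noise is an assumption).
    return mu + sigma * rng.standard_normal(m)

def offline_estimate(functional, n_arms=2000, pulls_per_arm=50):
    # Offline scheme: commit to the set of arms and the per-arm pull
    # budget up front, then apply the functional to the empirical means.
    mus = sample_arm_means(n_arms)
    hat_mus = np.array([pull(mu, pulls_per_arm).mean() for mu in mus])
    return functional(hat_mus)

mean_est = offline_estimate(np.mean)      # mean functional
median_est = offline_estimate(np.median)  # median functional
```

An online variant would instead decide adaptively, after each pull, whether to sample a new arm or re-pull an existing one; the abstract's point is that this adaptivity does not help for the mean but does for functionals like the median, maximum, and trimmed mean.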
Supplementary Material: pdf