On Abruptly-Changing and Slowly-Varying Multiarmed Bandit Problems

12 May 2023 (modified: 12 May 2023) · OpenReview Archive Direct Upload
Abstract: We study the non-stationary stochastic multi-armed bandit (MAB) problem and propose two generic algorithms, namely, Limited Memory Deterministic Sequencing of Exploration and Exploitation (LM-DSEE) and Sliding-Window Upper Confidence Bound# (SW-UCB#). We rigorously analyze these algorithms in abruptly-changing and slowly-varying environments and characterize their performance. We show that the expected cumulative regret for these algorithms in either of the environments is upper bounded by sublinear functions of time, i.e., the time average of the regret asymptotically converges to zero. We complement our analysis with numerical illustrations.
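To make the sliding-window idea concrete, below is a minimal Python sketch of a generic sliding-window UCB rule. It is not the paper's exact SW-UCB# (whose window growth and constants are specified in the paper itself); the fixed window size, the exploration constant `c`, and the two-armed abruptly-changing Bernoulli simulation are all illustrative assumptions.

```python
import math
import random
from collections import deque

class SlidingWindowUCB:
    """Generic sliding-window UCB sketch: arm statistics are computed over the
    last `window` plays only, so stale observations are forgotten when the
    reward distributions change. NOTE: an illustrative simplification, not the
    paper's SW-UCB# (which uses a different window schedule and constants)."""

    def __init__(self, n_arms, window=100, c=2.0):
        self.n_arms = n_arms
        self.window = window      # assumed fixed window size
        self.c = c                # assumed exploration constant
        self.history = deque()    # (arm, reward) pairs inside the window

    def select_arm(self):
        counts = [0] * self.n_arms
        sums = [0.0] * self.n_arms
        for arm, r in self.history:
            counts[arm] += 1
            sums[arm] += r
        # Play any arm with no observation inside the current window.
        for a in range(self.n_arms):
            if counts[a] == 0:
                return a
        n = len(self.history)
        # Windowed empirical mean plus an exploration bonus.
        return max(
            range(self.n_arms),
            key=lambda a: sums[a] / counts[a]
                          + math.sqrt(self.c * math.log(n) / counts[a]),
        )

    def update(self, arm, reward):
        self.history.append((arm, reward))
        if len(self.history) > self.window:
            self.history.popleft()  # forget the oldest observation

# Toy abruptly-changing environment: the better arm swaps at t = 500.
random.seed(0)
bandit = SlidingWindowUCB(n_arms=2, window=100)
late_pulls_of_arm1 = 0
for t in range(1000):
    arm = bandit.select_arm()
    means = (0.9, 0.1) if t < 500 else (0.1, 0.9)
    reward = 1.0 if random.random() < means[arm] else 0.0
    bandit.update(arm, reward)
    if t >= 900:
        late_pulls_of_arm1 += (arm == 1)
```

The `deque` gives O(1) window maintenance; recomputing the per-arm sums on every step keeps the sketch short at the cost of O(window) work per round, which a real implementation would amortize with running sums.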