Keywords: stochastic gradient bandit, arbitrary stepsize, global convergence
TL;DR: The stochastic gradient bandit algorithm converges to a globally optimal policy almost surely under any constant learning rate.
Abstract: We provide a new understanding of the stochastic gradient bandit algorithm by showing that it converges to a globally optimal policy almost surely using \emph{any} constant learning rate. This result demonstrates that the stochastic gradient algorithm continues to balance exploration and exploitation appropriately even in scenarios where standard smoothness and noise control assumptions break down. The proofs are based on novel findings about action sampling rates and the relationship between cumulative progress and noise, and extend the current understanding of how simple stochastic gradient methods behave in bandit settings.
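The abstract concerns the classic stochastic gradient bandit algorithm: a softmax policy over arms updated by REINFORCE-style gradient steps with a constant learning rate. A minimal sketch of that algorithm is below, assuming a multi-armed bandit with fixed mean rewards; the function name `sg_bandit` and the specific reward means are illustrative choices, not taken from the paper.

```python
import numpy as np

def sg_bandit(means, eta=0.1, steps=20000, seed=0):
    """Stochastic gradient bandit with softmax policy and constant stepsize eta.

    means: per-arm expected rewards (here used as deterministic rewards
           for simplicity; the convergence claim covers stochastic rewards).
    Returns the final softmax policy over arms.
    """
    rng = np.random.default_rng(seed)
    theta = np.zeros(len(means))          # logits, one per arm
    for _ in range(steps):
        # Softmax policy pi(a) = exp(theta_a) / sum_b exp(theta_b)
        z = np.exp(theta - theta.max())
        pi = z / z.sum()
        a = rng.choice(len(means), p=pi)  # sample an action from the policy
        r = means[a]                      # observe reward for the chosen arm
        # REINFORCE gradient of log pi(a): one-hot(a) - pi
        grad = -pi
        grad[a] += 1.0
        theta += eta * r * grad           # constant learning rate update
    z = np.exp(theta - theta.max())
    return z / z.sum()

policy = sg_bandit([0.1, 0.5, 0.9])
```

With a constant learning rate, classical smoothness-based analyses do not directly apply, yet (per the abstract's claim) the iterates still concentrate on the optimal arm almost surely; in this sketch the final policy places most of its mass on the highest-reward arm.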
Primary Area: Bandits
Submission Number: 19456