Approximate information maximization for bandit games

Published: 01 Aug 2024, Last Modified: 09 Oct 2024 · EWRL17 · CC BY 4.0
Keywords: bandits, information maximization, Bayesian inference, physics-based approach
TL;DR: We propose a novel bandit strategy based on the information-maximization principle, prove its theoretical optimality for multi-armed Gaussian bandits, and show its strong empirical performance and applicability to various bandit problems.
Abstract: Entropy maximization and free energy minimization are general physics principles for modeling dynamic systems. Notable examples include modeling decision-making within the brain using the free-energy principle, optimizing the accuracy-complexity trade-off when accessing hidden variables with the information bottleneck principle (Tishby et al. 2000), and navigation in random environments using information maximization (Vergassola et al. 2007). Building on these principles, we propose a new class of bandit algorithms that maximize an approximation to the information of a key variable within the system. To this end, we develop an approximate, analytical, physics-based representation of the entropy to forecast the information gain of each action, and greedily choose the action with the largest gain. This method yields strong performance in classical bandit settings. Motivated by its empirical success, we prove its asymptotic optimality for the multi-armed bandit problem with Gaussian rewards. Since it encompasses the system's properties in a single, global functional, this approach can be efficiently adapted to more complex bandit settings. This calls for further investigation of information maximization approaches for multi-armed bandit problems.
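The abstract does not spell out the algorithm, but its core loop (approximate the entropy of a key variable, forecast each action's information gain, pick the action with the largest gain) can be illustrated with a minimal, hypothetical sketch. Here the key variable is assumed to be the identity of the best arm under independent Gaussian posteriors, its entropy is approximated by Monte Carlo sampling rather than the paper's analytical representation, and the forecast is a one-step lookahead; all function names and parameter values are illustrative, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def best_arm_entropy(mu, sigma, n_samples=1000):
    # Monte Carlo estimate of the entropy (in nats) of p(best arm)
    # under independent Gaussian posteriors N(mu_i, sigma_i^2).
    draws = rng.normal(mu, sigma, size=(n_samples, len(mu)))
    counts = np.bincount(draws.argmax(axis=1), minlength=len(mu))
    p = counts[counts > 0] / n_samples
    return -(p * np.log(p)).sum()

def expected_entropy_after_pull(arm, mu, sigma, noise=1.0, n_sim=10):
    # One-step lookahead: simulate hypothetical rewards for `arm` from its
    # posterior predictive, apply the conjugate Gaussian update, and average
    # the resulting best-arm entropy over the simulated outcomes.
    total = 0.0
    for _ in range(n_sim):
        r = rng.normal(mu[arm], np.sqrt(sigma[arm] ** 2 + noise ** 2))
        prec = 1.0 / sigma[arm] ** 2 + 1.0 / noise ** 2
        new_mu, new_sigma = mu.copy(), sigma.copy()
        new_mu[arm] = (mu[arm] / sigma[arm] ** 2 + r / noise ** 2) / prec
        new_sigma[arm] = np.sqrt(1.0 / prec)
        total += best_arm_entropy(new_mu, new_sigma)
    return total / n_sim

# Hypothetical 3-armed Gaussian bandit with unit reward noise.
true_means = np.array([0.0, 0.5, 1.0])
mu, sigma, noise = np.zeros(3), np.full(3, 5.0), 1.0  # broad priors
for t in range(100):
    h0 = best_arm_entropy(mu, sigma)
    gains = [h0 - expected_entropy_after_pull(a, mu, sigma, noise)
             for a in range(3)]
    arm = int(np.argmax(gains))          # greedy information maximization
    r = rng.normal(true_means[arm], noise)
    prec = 1.0 / sigma[arm] ** 2 + 1.0 / noise ** 2
    mu[arm] = (mu[arm] / sigma[arm] ** 2 + r / noise ** 2) / prec
    sigma[arm] = np.sqrt(1.0 / prec)
print("posterior means:", mu.round(2))
```

Note that this sketch is purely information-seeking and ignores the exploration-exploitation balance that the paper's entropy functional is designed to capture; it only shows the shape of the greedy forecast-and-choose loop.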
Supplementary Material: zip
Submission Number: 22