Probabilistic Inference in Reinforcement Learning Done Right

Published: 21 Sept 2023, Last Modified: 11 Jan 2024, NeurIPS 2023 poster
Keywords: Reinforcement learning, Bayesian inference, Exploration
TL;DR: We derive a principled Bayesian approach for 'RL as inference' that leads to efficient exploration.
Abstract: A popular perspective in reinforcement learning (RL) casts the problem as probabilistic inference on a graphical model of the Markov decision process (MDP). The core object of study is the probability of each state-action pair being visited under the optimal policy. Previous approaches to approximate this quantity can be arbitrarily poor, leading to algorithms that do not implement genuine statistical inference and consequently do not perform well in challenging problems. In this work, we undertake a rigorous Bayesian treatment of the posterior probability of state-action optimality and clarify how it flows through the MDP. We first reveal that this quantity can indeed be used to generate a policy that explores efficiently, as measured by regret. Unfortunately, computing it is intractable, so we derive a new variational Bayesian approximation yielding a tractable convex optimization problem and establish that the resulting policy also explores efficiently. We call our approach VAPOR and show that it has strong connections to Thompson sampling, K-learning, and maximum entropy exploration. We conclude with some experiments demonstrating the performance advantage of a deep RL version of VAPOR.
Submission Number: 13029
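
The abstract's central object is the posterior probability that each state-action pair is optimal, and it notes a strong connection to Thompson sampling. Below is a minimal sketch of that idea in a toy Gaussian bandit: estimate the posterior probability of optimality by Monte Carlo and act by probability matching. This is an illustrative assumption of the general concept, not the paper's VAPOR method or its variational treatment of the MDP; the bandit setup, prior, and all names (`posterior_prob_optimal`, `n_actions`, etc.) are chosen here for illustration only.

```python
# Illustrative sketch (not the paper's VAPOR algorithm): Monte Carlo estimate of the
# posterior probability that each action is optimal in a toy Gaussian bandit, plus a
# probability-matching policy built from that estimate.
import numpy as np

rng = np.random.default_rng(0)

n_actions = 4
true_means = rng.normal(0.0, 1.0, size=n_actions)  # unknown to the agent

# Conjugate Gaussian posterior over each arm's mean (unit observation noise):
# with prior N(0, 1), after n observations summing to s the posterior is N(s/(n+1), 1/(n+1)).
counts = np.zeros(n_actions)
sums = np.zeros(n_actions)

def posterior_prob_optimal(n_samples=10_000):
    """Monte Carlo estimate of P(action a is optimal | data) for every action."""
    post_mean = sums / (counts + 1.0)
    post_std = np.sqrt(1.0 / (counts + 1.0))
    draws = rng.normal(post_mean, post_std, size=(n_samples, n_actions))
    best = draws.argmax(axis=1)
    return np.bincount(best, minlength=n_actions) / n_samples

for t in range(500):
    p_opt = posterior_prob_optimal()
    # Probability matching: choose each action with probability equal to its posterior
    # probability of optimality (Thompson sampling does this implicitly via one posterior draw).
    a = rng.choice(n_actions, p=p_opt)
    r = rng.normal(true_means[a], 1.0)
    counts[a] += 1
    sums[a] += r

print("posterior P(optimal):", np.round(posterior_prob_optimal(), 3))
print("true best action:", true_means.argmax())
```

In the full MDP setting treated by the paper, this optimality probability must flow through the transition structure and is intractable to compute exactly, which is what motivates the variational approximation described in the abstract.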