Bayesian Meta-Reinforcement Learning with Laplace Variational Recurrent Networks

Published: 01 Aug 2024, Last Modified: 09 Oct 2024 · EWRL17 · CC BY 4.0
Keywords: Variational Inference, Bayesian Reinforcement Learning, Meta-Reinforcement Learning, Uncertainty Estimation
TL;DR: We apply the Laplace approximation on a recurrent neural network to see how well non-Bayesian trained agents can estimate uncertainty in their hidden state.
Abstract: Meta-reinforcement learning trains a single reinforcement learning algorithm on a distribution of tasks so that it generalizes quickly to new tasks outside the training set at test time. From a Bayesian perspective, this can be interpreted as performing amortized variational inference on the posterior distribution over training tasks. Among the various meta-reinforcement learning approaches, a common method is to represent this distribution with a point estimate using a recurrent neural network. We show how to augment this point estimate into a full distribution through the Laplace approximation, either at the start of, during, or after learning, without modifying the base model architecture. With our approximation, we can estimate distributional statistics (e.g., the entropy) of non-Bayesian agents, and we observe that point-estimate based methods produce overconfident estimators that do not satisfy consistency. Furthermore, when comparing our approach to full-distribution learning of the task posterior, we find that our method performs on par with variational inference baselines despite being simpler to implement.
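To illustrate the core idea of the Laplace approximation used here, the sketch below fits a Gaussian around a given point estimate of a hidden state by inverting the Hessian of a negative log-posterior. This is a minimal, generic sketch, not the paper's implementation: the loss function, the finite-difference Hessian, and the hidden-state dimension are all hypothetical stand-ins for the recurrent network's actual objective.

```python
import numpy as np

def laplace_posterior(loss, h_star, eps=1e-4):
    """Gaussian (Laplace) approximation around a point estimate h_star.

    loss: callable mapping a hidden-state vector to a scalar negative
    log-posterior; h_star should approximate its mode. Returns the mean
    and covariance of the approximating Gaussian, where the covariance
    is the inverse Hessian of the loss at h_star.
    """
    d = h_star.size
    H = np.zeros((d, d))
    # Finite-difference Hessian via central differences (a placeholder
    # for exact or autodiff Hessians in a real implementation).
    for i in range(d):
        for j in range(d):
            e_i = np.zeros(d); e_i[i] = eps
            e_j = np.zeros(d); e_j[j] = eps
            H[i, j] = (loss(h_star + e_i + e_j) - loss(h_star + e_i - e_j)
                       - loss(h_star - e_i + e_j) + loss(h_star - e_i - e_j)) / (4 * eps**2)
    cov = np.linalg.inv(0.5 * (H + H.T))  # symmetrize before inverting
    return h_star, cov

def gaussian_entropy(cov):
    """Differential entropy of a multivariate Gaussian with covariance cov."""
    d = cov.shape[0]
    return 0.5 * (d * np.log(2 * np.pi * np.e) + np.linalg.slogdet(cov)[1])
```

Once the Gaussian is in place, distributional statistics such as the entropy follow in closed form, which is what enables comparing the confidence of non-Bayesian agents against full-distribution baselines.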
Submission Number: 37