Keywords: Reinforcement Learning, Bayesian Reinforcement Learning, Bayes-Adaptive Markov Decision Processes, Meta Learning, Meta-Reinforcement Learning
Abstract: Bayesian Reinforcement Learning (BRL) provides a framework for generalisation across Reinforcement Learning (RL) problems through its use of Bayesian task parameters in the transition and reward models. However, classical BRL methods assume known forms of the transition and reward models, reducing their applicability to real-world problems. As a result, recent deep BRL methods have started to incorporate model learning, though applying neural networks directly to the joint data and task parameters requires optimising the Evidence Lower Bound (ELBO). ELBOs are difficult to optimise and may yield indistinct task parameters, and hence compromised BRL policies. To this end, we introduce a novel deep BRL method, $\textbf{G}$eneralised $\textbf{Li}$near Models in deep $\textbf{B}$ayesian $\textbf{RL}$ with Learnable Basis Functions ($\textbf{GLiBRL}$), that enables efficient and accurate learning of transition and reward models, with a fully tractable marginal likelihood and Bayesian inference over task parameters and model noise. On the challenging MetaWorld ML10 and ML45 benchmarks, GLiBRL improves the success rate of VariBAD, a state-of-the-art deep BRL method, by up to $2.7\times$. Compared against representative or recent deep BRL / Meta-RL methods, such as MAML, RL$^2$, SDVT, TrMRL and ECET, GLiBRL also consistently demonstrates low-variance, competitive performance.
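To make the abstract's central claim concrete, the sketch below illustrates the general idea of a Bayesian generalised linear model on basis features: conditioned on the (here fixed, in GLiBRL learnable) basis functions, the posterior over task-specific weights and the marginal likelihood are available in closed form, so no ELBO is needed. This is an illustrative toy example under standard Bayesian linear regression assumptions, not the paper's implementation; all function names and hyperparameter values (`alpha`, `beta`) are ours.

```python
import numpy as np

# Bayesian linear regression on basis features phi(x), with Gaussian
# weight prior N(0, I/alpha) and observation noise precision beta.
# Both the weight posterior and the model evidence are closed-form.

def posterior(Phi, y, alpha=1.0, beta=25.0):
    """Posterior N(mean, cov) over weights w given design matrix Phi."""
    d = Phi.shape[1]
    cov = np.linalg.inv(alpha * np.eye(d) + beta * Phi.T @ Phi)
    mean = beta * cov @ Phi.T @ y
    return mean, cov

def log_marginal_likelihood(Phi, y, alpha=1.0, beta=25.0):
    """Closed-form log evidence log p(y | Phi, alpha, beta)."""
    n, d = Phi.shape
    A = alpha * np.eye(d) + beta * Phi.T @ Phi
    m = beta * np.linalg.solve(A, Phi.T @ y)      # posterior mean
    e = y - Phi @ m                                # residuals at the mean
    return (d / 2 * np.log(alpha) + n / 2 * np.log(beta)
            - beta / 2 * e @ e - alpha / 2 * m @ m
            - 0.5 * np.linalg.slogdet(A)[1] - n / 2 * np.log(2 * np.pi))

# Toy task: y = sin(x) + noise, fit with a small Fourier basis.
rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=50)
y = np.sin(x) + 0.2 * rng.normal(size=50)
Phi = np.column_stack([np.sin(k * x) for k in (1, 2, 3)]
                      + [np.cos(k * x) for k in (1, 2, 3)])
mean, cov = posterior(Phi, y)
lml = log_marginal_likelihood(Phi, y)
```

Because the evidence is tractable, basis-function parameters can in principle be trained by maximising it directly with gradient methods, rather than by optimising a potentially loose ELBO.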
Supplementary Material: zip
Primary Area: reinforcement learning
Submission Number: 7503