Bayesian Residual Policy Optimization: Scalable Bayesian Reinforcement Learning with Clairvoyant Experts

25 Sept 2019 (modified: 05 May 2023) · ICLR 2020 Conference Blind Submission
Keywords: Bayesian Residual Reinforcement Learning, Residual Reinforcement Learning, Bayes Policy Optimization
TL;DR: We propose a scalable Bayesian Reinforcement Learning algorithm that learns a Bayesian correction over an ensemble of clairvoyant experts to solve problems with complex latent rewards and dynamics.
Abstract: Informed and robust decision making in the face of uncertainty is critical for robots that perform physical tasks alongside people. We formulate this as a Bayesian Reinforcement Learning problem over latent Markov Decision Processes (MDPs). While Bayes-optimality is theoretically the gold standard, existing algorithms do not scale well to continuous state and action spaces. We propose a scalable solution that builds on the following insight: in the absence of uncertainty, each latent MDP is easier to solve. We split the challenge into two simpler components. First, we obtain an ensemble of clairvoyant experts and fuse their advice to compute a baseline policy. Second, we train a Bayesian residual policy to improve upon the ensemble's recommendation and learn to reduce uncertainty. Our algorithm, Bayesian Residual Policy Optimization (BRPO), imports the scalability of policy gradient methods as well as the initialization from prior models. BRPO significantly improves the ensemble of experts and drastically outperforms existing adaptive RL methods.
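The two-stage recipe described in the abstract (fuse the clairvoyant experts' advice into a baseline, then learn a Bayesian residual correction on top of it) can be summarized in a short sketch. The Python below is a minimal illustration under assumed interfaces, not the paper's implementation: the names fuse_experts, brpo_action, and residual_policy are hypothetical stand-ins for whatever function approximators are actually used.

import numpy as np

def fuse_experts(expert_actions, belief):
    # Baseline recommendation: belief-weighted average of the
    # clairvoyant experts' actions (one expert per latent MDP).
    # belief has shape (K,); expert_actions has shape (K, action_dim).
    return np.einsum("k,kd->d", belief, expert_actions)

def brpo_action(state, belief, experts, residual_policy):
    # Final action = ensemble baseline + learned Bayesian residual.
    # The residual policy conditions on both state and belief, so it
    # can learn uncertainty-reducing corrections the experts lack.
    expert_actions = np.stack([pi(state) for pi in experts])
    baseline = fuse_experts(expert_actions, belief)
    correction = residual_policy(np.concatenate([state, belief]))
    return baseline + correction

In training, only residual_policy would be updated (e.g., with a policy gradient method) while the expert ensemble stays fixed; this matches the abstract's framing that BRPO improves upon the ensemble's recommendation rather than learning a policy from scratch.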
