Planning With Uncertainty: Deep Exploration in Model-Based Reinforcement Learning

Published: 01 Feb 2023, Last Modified: 13 Feb 2023. Submitted to ICLR 2023.
Keywords: Reinforcement learning, exploration, uncertainty, planning
TL;DR: Demonstrating deep exploration with MuZero by planning optimistically with epistemic uncertainty
Abstract: Deep model-based reinforcement learning has shown super-human performance in many challenging domains. However, low sample efficiency and limited exploration remain leading obstacles in the field. In this paper, we demonstrate deep exploration in model-based RL by incorporating epistemic uncertainty into planning trees, circumventing the standard approach of propagating uncertainty through value learning. We evaluate this approach with the state-of-the-art model-based RL algorithm MuZero and extend its training process to stabilize learning from explicitly exploratory decisions. Our results demonstrate that planning with uncertainty achieves effective deep exploration with standard uncertainty estimation mechanisms, yielding significant gains in sample efficiency.
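The submission itself includes no code, and the abstract does not spell out the mechanism. As a rough illustration only, one common way to realize "planning optimistically with epistemic uncertainty" is to score tree-search nodes with an upper-confidence value, using the disagreement of an ensemble of value predictors as the epistemic uncertainty estimate. The sketch below assumes exactly that; the names (`optimistic_value`, `beta`, the toy ensemble) are illustrative and are not taken from the paper, and this should not be read as the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def ensemble_value(ensemble, state):
    """Query each ensemble member for its value prediction at `state`."""
    return np.array([member(state) for member in ensemble])

def optimistic_value(ensemble, state, beta=1.0):
    """Upper-confidence value: mean prediction plus beta * epistemic std.

    Ensemble disagreement (the std of member predictions) stands in for
    epistemic uncertainty; `beta` trades off exploitation against
    exploration. A planner that selects actions by this score prefers
    states whose value the model is still uncertain about.
    """
    preds = ensemble_value(ensemble, state)
    return preds.mean() + beta * preds.std()

# Toy usage: an "ensemble" of five linear value functions with random weights.
ensemble = [lambda s, w=rng.normal(size=4): float(w @ s) for _ in range(5)]
state = np.ones(4)
print(optimistic_value(ensemble, state, beta=1.0))
```

Scoring nodes this way injects uncertainty directly into the planning tree, rather than first propagating it through a learned value function, which matches the distinction the abstract draws.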
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Reinforcement Learning (e.g., decision and control, planning, hierarchical RL, robotics)
Supplementary Material: zip