E-MCTS: Deep Exploration in Model-Based Reinforcement Learning by Planning with Epistemic Uncertainty

Published: 20 Jul 2023, Last Modified: 30 Aug 2023, Venue: EWRL16
Keywords: Reinforcement learning, exploration, uncertainty, planning
TL;DR: Achieving deep exploration with MuZero by planning optimistically with epistemic uncertainty
Abstract: One of the most well-studied and best-performing planning approaches used in Model-Based Reinforcement Learning (MBRL) is Monte-Carlo Tree Search (MCTS). Key challenges for MCTS-based MBRL methods remain deep exploration and reliability in the face of the unknown, and both can be alleviated through principled epistemic uncertainty estimation in the predictions of MCTS. We present two main contributions: first, we develop a methodology for propagating epistemic uncertainty through MCTS, enabling agents to estimate the epistemic uncertainty in their predictions; second, we use the propagated uncertainty in a novel deep-exploration algorithm that explicitly plans to explore. We incorporate our approach into variations of MCTS-based MBRL with both learned and provided models, and empirically demonstrate deep exploration through the successful epistemic uncertainty estimation achieved by our approach. We compare to a non-planning-based deep-exploration baseline, and demonstrate that planning with epistemic MCTS significantly outperforms non-planning-based exploration in the investigated setting.
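To make the idea in the abstract concrete, here is a minimal, hypothetical sketch of what propagating epistemic uncertainty through MCTS backups and planning optimistically with it might look like. All names (Node, optimistic_score, backup, beta) are illustrative assumptions, not the authors' implementation; the actual E-MCTS propagation and selection rules are defined in the paper.

```python
import math

class Node:
    """MCTS node tracking a value estimate and an epistemic-variance estimate."""
    def __init__(self, prior=1.0):
        self.visits = 0
        self.value_sum = 0.0     # running sum of backed-up values
        self.variance_sum = 0.0  # running sum of backed-up epistemic variances
        self.prior = prior
        self.children = {}       # action -> Node

    @property
    def value(self):
        return self.value_sum / self.visits if self.visits else 0.0

    @property
    def variance(self):
        # Unvisited nodes default to high epistemic uncertainty.
        return self.variance_sum / self.visits if self.visits else 1.0

def optimistic_score(parent, child, beta=1.0, c_puct=1.25):
    """PUCT-style selection score augmented with an epistemic-uncertainty bonus.

    The beta * sqrt(variance) term steers the search toward subtrees whose
    predictions the model is epistemically unsure about (deep exploration).
    """
    visit_bonus = c_puct * child.prior * math.sqrt(parent.visits) / (1 + child.visits)
    return child.value + beta * math.sqrt(child.variance) + visit_bonus

def backup(path, leaf_value, leaf_variance, gamma=0.997):
    """Propagate a leaf's value and epistemic variance up to the root.

    Per-step rewards are omitted for brevity; the variance of a discounted
    return scales by gamma**2 per step, assuming per-step uncertainties are
    independent (a simplifying assumption for this sketch).
    """
    value, variance = leaf_value, leaf_variance
    for node in reversed(path):
        node.visits += 1
        node.value_sum += value
        node.variance_sum += variance
        value *= gamma
        variance *= gamma ** 2
```

In this sketch, beta plays the role of an optimism coefficient, trading off exploitation of the current value estimate against exploration of epistemically uncertain subtrees, analogous to the confidence bonus in UCB-style bandit rules.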