Trust, but verify: model-based exploration in sparse reward environments

Published: 12 Dec 2020 · Last Modified: 05 May 2023 · LMCA 2020 Poster
Keywords: reinforcement learning, model-based, exploration, on-line planning, imperfect environment model
Abstract: We propose the $\textit{trust-but-verify}$ (TBV) mechanism, a new method that uses model uncertainty estimates to guide exploration. The mechanism augments graph-search planning algorithms with the capacity to deal with a learned model's imperfections. We identify a frequent type of model error, which we dub $\textit{false loops}$, that is particularly dangerous for graph-search algorithms in discrete environments: these errors impose falsely pessimistic expectations and thus hinder exploration. We confirm this experimentally and show that TBV can effectively alleviate them. Combined with MCTS or Best-First Search, TBV forms an effective model-based reinforcement learning solution that robustly solves sparse-reward problems.
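The mechanism described in the abstract lends itself to a compact illustration. The sketch below is not the authors' implementation; it shows one plausible reading of trust-but-verify under stated assumptions (ensemble disagreement as the uncertainty estimate; 1-D state vectors; hypothetical names such as `ensemble_predict`, `Node`, and `trust_threshold`). A model-predicted state is merged with an already-visited node, potentially closing a loop in the search graph, only when the model is confident; otherwise the prediction is kept as a distinct node, so a false loop cannot prematurely close off that branch of the search.

```python
import numpy as np
from dataclasses import dataclass, field

@dataclass
class Node:
    """A node in the search graph (hypothetical minimal structure)."""
    state: np.ndarray
    children: dict = field(default_factory=dict)  # action -> Node

def ensemble_predict(ensemble, state, action):
    """Predict the next state with an ensemble of learned models.

    Disagreement between ensemble members serves as a (hypothetical)
    model-uncertainty estimate; the paper may use a different estimator.
    """
    preds = np.stack([model(state, action) for model in ensemble])
    return preds.mean(axis=0), float(preds.std(axis=0).max())

def expand(node, action, ensemble, visited, trust_threshold=0.05):
    """Expand one search-graph edge, verifying uncertain predictions.

    `visited` maps hashable state keys to existing nodes. Merging into an
    existing node can close a loop in the search graph; we allow the merge
    only when the model is confident, which blocks false loops caused by
    model errors.
    """
    next_state, uncertainty = ensemble_predict(ensemble, node.state, action)
    key = tuple(np.round(next_state, 3))  # assumes a 1-D state vector
    if key in visited and uncertainty <= trust_threshold:
        child = visited[key]   # trust: confidently merge / close the loop
    else:
        child = Node(next_state)  # verify: uncertain or new, keep distinct
        visited.setdefault(key, child)
    node.children[action] = child
    return child
```

In this reading, uncertainty gates only the merge step: trusting a confident prediction keeps the search graph compact, while refusing to merge an uncertain one trades a slightly larger graph for robustness to the model's errors.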