Fast Exploration with Simplified Models and Approximately Optimistic Planning in Model Based Reinforcement Learning

Ramtin Keramati, Jay Whang, Patrick Cho, Emma Brunskill

Sep 27, 2018 · ICLR 2019 Conference Blind Submission
  • Abstract: Humans learn to play video games significantly faster than the state-of-the-art reinforcement learning (RL) algorithms. People seem to build simple models that are easy to learn and that support planning and strategic exploration. Inspired by this, we investigate two issues in leveraging model-based RL for sample efficiency. First, we investigate how to perform strategic exploration when exact planning is not feasible, and empirically show that optimistic Monte Carlo Tree Search (an illustrative sketch appears after this list) outperforms posterior sampling methods. Second, we show how to learn simple deterministic models that support fast learning by using object representations. We illustrate the benefit of these ideas by introducing a novel algorithm, Strategic Object Oriented Reinforcement Learning (SOORL), that outperforms state-of-the-art algorithms in the game of Pitfall! in fewer than 50 episodes.
  • Keywords: Reinforcement Learning, Strategic Exploration, Model Based Reinforcement Learning
  • TL;DR: We study exploration with imperfect planning, use object representations to learn simple models, and introduce a new sample-efficient RL algorithm that achieves state-of-the-art results on Pitfall!
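
For readers unfamiliar with the term, the sketch below illustrates one common way to make tree search "optimistic": a UCB-style bonus that inflates the value of rarely tried actions during planning in a learned deterministic model. This is only a minimal, hypothetical illustration under assumptions; the class and function names (ToyDeterministicModel, optimistic_mcts) and the toy chain environment are invented here and are not the authors' SOORL implementation.

```python
"""Minimal sketch of UCB-style optimistic Monte Carlo Tree Search.

Hypothetical example: planning in a small deterministic chain model,
where rarely tried actions receive an optimism bonus during selection.
"""
import math
from collections import defaultdict


class ToyDeterministicModel:
    """Hypothetical learned deterministic model: a short chain MDP."""
    n_states, n_actions, horizon = 6, 2, 10

    def step(self, state, action):
        # Action 1 moves right, action 0 moves left; reward only at the last state.
        next_state = min(state + 1, self.n_states - 1) if action == 1 else max(state - 1, 0)
        reward = 1.0 if next_state == self.n_states - 1 else 0.0
        return next_state, reward


def optimistic_mcts(model, root, n_simulations=200, c_ucb=1.4):
    """Plan from `root`; the c_ucb bonus makes rarely tried actions look optimistic."""
    N = defaultdict(int)    # visit counts for (state, depth, action)
    Q = defaultdict(float)  # running mean returns

    def simulate(state, depth):
        if depth == model.horizon:
            return 0.0
        # Optimistic selection: untried actions get infinite value, tried ones a UCB bonus.
        total = sum(N[(state, depth, a)] for a in range(model.n_actions)) + 1

        def ucb(a):
            n = N[(state, depth, a)]
            if n == 0:
                return float("inf")
            return Q[(state, depth, a)] + c_ucb * math.sqrt(math.log(total) / n)

        action = max(range(model.n_actions), key=ucb)
        next_state, reward = model.step(state, action)
        ret = reward + simulate(next_state, depth + 1)
        # Incremental update of the mean return for this (state, depth, action).
        N[(state, depth, action)] += 1
        Q[(state, depth, action)] += (ret - Q[(state, depth, action)]) / N[(state, depth, action)]
        return ret

    for _ in range(n_simulations):
        simulate(root, 0)
    # Return the greedy action at the root after planning.
    return max(range(model.n_actions), key=lambda a: Q[(root, 0, a)])


if __name__ == "__main__":
    model = ToyDeterministicModel()
    print("Greedy action at the root:", optimistic_mcts(model, root=0))
```

The optimism here comes from the square-root bonus (and the infinite value for untried actions), which drives the planner toward under-explored branches of the search tree; the paper's contribution concerns how such optimistic planning compares to posterior sampling when planning is approximate, not this particular implementation.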
