Explore then Execute: Adapting without Rewards via Factorized Meta-Reinforcement Learning

12 Jun 2020 (modified: 13 Jul 2020) · ICML 2020 Workshop LifelongML · Blind Submission
  • Student First Author: Yes
  • Keywords: meta-learning, reinforcement learning, exploration
  • Abstract: We seek to learn efficiently by leveraging shared structure between different tasks and environments. For example, cooking is similar across kitchens, even though the ingredients may change location. In principle, meta-reinforcement learning approaches can exploit this shared structure, but in practice, they fail to adapt to new environments when adaptation requires targeted exploration (e.g., exploring the cabinets to find ingredients in a new kitchen). We show that existing approaches fail due to a chicken-and-egg problem: learning what to explore requires knowing what information is critical for solving the task, but learning to solve the task requires already having gathered this information via exploration. For example, exploring to find the ingredients only helps a robot prepare a meal if it already knows how to cook, but the robot can only learn to cook if it already knows where the ingredients are. To address this, we propose a new exploration objective (DREAM), based on identifying key information in the environment, independent of how this information will be used to solve the task. By decoupling exploration from task execution, DREAM explores and consequently adapts to new environments without requiring a reward signal, provided the task is specified via an instruction. Empirically, DREAM scales to more complex problems, such as sparse-reward 3D visual navigation, while existing approaches fail due to insufficient exploration.
  • TL;DR: Adapting to new tasks during meta-testing without requiring rewards
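The decoupling described in the abstract can be illustrated with a toy sketch (names and environment are illustrative assumptions, not the authors' implementation): an exploration phase identifies the task-relevant information (the ingredient's location), producing a summary that is independent of any particular downstream instruction; an execution phase then consumes that summary to follow the instruction with no test-time reward signal.

```python
import random

N_CABINETS = 4

def make_kitchen(seed):
    """Each new kitchen hides the ingredient in a different cabinet."""
    rng = random.Random(seed)
    return {"ingredient_cabinet": rng.randrange(N_CABINETS)}

def explore(kitchen):
    """Exploration phase: open cabinets until the key information is found.

    The resulting summary does not depend on which downstream instruction
    will be executed -- this is the factorization the abstract describes.
    """
    for cabinet in range(N_CABINETS):
        if kitchen["ingredient_cabinet"] == cabinet:
            return {"ingredient_cabinet": cabinet}

def execute(kitchen, instruction, summary):
    """Execution phase: solve the instructed task using only the summary.

    No reward signal is consulted here, mirroring reward-free adaptation
    at meta-test time.
    """
    if instruction == "fetch_ingredient":
        return summary["ingredient_cabinet"] == kitchen["ingredient_cabinet"]
    raise ValueError(f"unknown instruction: {instruction}")

kitchen = make_kitchen(seed=3)
summary = explore(kitchen)
print(execute(kitchen, "fetch_ingredient", summary))  # True
```

In the actual method, both phases are learned policies; the point of the sketch is only the interface: exploration writes a reusable summary of the environment, and execution reads it, so neither phase needs the other's learning signal at test time.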