Planning Goals for Exploration

Published: 13 Dec 2022, Last Modified: 03 Jul 2024. CoRL 2022 Workshop on Long-Horizon Planning (Oral)
Keywords: model-based reinforcement learning, exploration, goal-conditioned reinforcement learning, unsupervised reinforcement learning, planning
TL;DR: We use world models to generate goals for exploration.
Abstract: Dropped into an unknown environment, what should an agent do to quickly learn about the environment and how to accomplish diverse tasks within it? We address this question within the goal-conditioned reinforcement learning paradigm by identifying how the agent should set its goals at training time to maximize exploration. We propose "Planning Exploratory Goals" (PEG), a method that sets goals for each training episode to directly optimize an intrinsic exploration reward. PEG first chooses goal commands such that the agent's goal-conditioned policy, at its current level of training, will end up in states with high exploration potential. It then launches an exploration policy starting from those promising states. To enable this direct optimization, PEG learns world models and adapts sampling-based planning algorithms to "plan goal commands". In challenging simulated robotics environments, including a multi-legged ant robot in a maze and a robot arm on a cluttered tabletop, PEG exploration enables more efficient and effective training of goal-conditioned policies relative to baselines and ablations. Our ant successfully navigates a long maze, and the robot arm successfully builds a stack of three blocks upon command. Project website: https://sites.google.com/view/peg-corl22
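The abstract describes goal selection as a sampling-based search over goal commands, scored by imagining the current goal-conditioned policy in a learned world model and evaluating an intrinsic exploration reward at the resulting states. The sketch below illustrates that idea with a generic CEM-style planner; it is a minimal illustration, not the authors' implementation, and `world_model.rollout`, `goal_policy`, and `exploration_reward` are hypothetical stand-ins for components the paper assumes are learned elsewhere.

```python
import numpy as np

def plan_exploratory_goal(world_model, goal_policy, exploration_reward,
                          start_state, goal_dim, n_candidates=512,
                          n_elites=64, n_iters=3, horizon=50, seed=0):
    """CEM-style search over goal commands (illustrative sketch only).

    Each candidate goal g is scored by imagining the current
    goal-conditioned policy pi(a | s, g) in the learned world model for
    `horizon` steps and evaluating an intrinsic exploration reward at the
    final imagined state, i.e. how promising that state is as a launch
    point for exploration.
    """
    rng = np.random.default_rng(seed)
    mean, std = np.zeros(goal_dim), np.ones(goal_dim)

    for _ in range(n_iters):
        # Sample candidate goal commands from the current search distribution.
        goals = rng.normal(mean, std, size=(n_candidates, goal_dim))
        scores = np.empty(n_candidates)
        for i, g in enumerate(goals):
            # Imagined rollout of the goal-reaching policy toward g
            # (hypothetical world-model API).
            final_state = world_model.rollout(start_state, goal_policy, g, horizon)
            # Exploration potential of where the policy actually ends up.
            scores[i] = exploration_reward(final_state)
        # Refit the search distribution to the highest-scoring goals.
        elites = goals[np.argsort(scores)[-n_elites:]]
        mean, std = elites.mean(axis=0), elites.std(axis=0) + 1e-6

    return mean  # goal command to issue for the next training episode
```

At training time, the returned goal command would be issued to the goal-conditioned policy for the first phase of the episode, after which the separate exploration policy takes over from wherever the agent lands, mirroring the "go to a promising state, then explore" structure described in the abstract.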
Community Implementations: [2 code implementations](https://www.catalyzex.com/paper/planning-goals-for-exploration/code)