An Expansive Latent Planner for Long-horizon Visual Offline Reinforcement Learning

Published: 13 Jun 2023, Last Modified: 01 Jul 2023, RSS-23 LTAMP Spotlight
Keywords: planning, reinforcement learning
Abstract: Sampling-based motion planning algorithms are highly effective at finding global paths in geometrically complex environments. However, classical approaches, such as RRT, are difficult to scale beyond low-dimensional search spaces and rely on privileged knowledge, e.g., about collision detection and underlying state distances. In this work, we take a step towards integrating sampling-based planning into the reinforcement learning framework to solve sparse-reward control tasks from high-dimensional inputs. Our method, called VELAP, determines sequences of waypoints through sampling-based exploration in a learned state embedding. Unlike other sampling-based techniques, we iteratively expand a tree-based memory of visited latent areas, which is leveraged to explore a larger portion of the latent space for a given budget of search iterations. We demonstrate state-of-the-art results in learning control from offline data in the context of vision-based manipulation under sparse reward feedback. Our method extends the set of available planning tools in model-based reinforcement learning to include a latent planner that searches for global solution paths, rather than being bound to a fixed prediction horizon.
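
To make the high-level loop described in the abstract concrete, the sketch below illustrates one plausible reading of tree-expansive planning in a learned latent space: grow a tree of visited latent states, bias expansion towards under-explored regions, and backtrack from the highest-value node to obtain a waypoint sequence. All components here (encode, latent_dynamics, value, the novelty-based expansion rule, and the dimensions) are hypothetical stand-ins for illustration only, not the paper's actual architecture or algorithm.

```python
# Minimal sketch of tree-expansive latent planning (illustrative only).
# encode / latent_dynamics / value are toy placeholders for learned models.
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM, N_CANDIDATES, N_ITERS = 8, 16, 200

def encode(obs):
    # Hypothetical learned encoder: observation -> latent embedding.
    return np.tanh(obs[:LATENT_DIM])

def latent_dynamics(z, a):
    # Hypothetical learned one-step latent transition model.
    return np.tanh(z + 0.1 * a)

def value(z):
    # Hypothetical learned value head; here a toy distance-to-goal proxy.
    return -np.linalg.norm(z - 1.0)

def plan(obs):
    root = encode(obs)
    nodes, parents = [root], {0: None}
    for _ in range(N_ITERS):
        # Expand the node farthest from its neighbors in the tree, so the
        # memory of visited latent areas spreads outward rather than
        # resampling the same region up to a fixed horizon.
        Z = np.stack(nodes)
        novelty = [np.sort(np.linalg.norm(Z - z, axis=1))[1:4].sum()
                   if len(nodes) > 1 else 1.0
                   for z in nodes]
        i = int(np.argmax(novelty))
        # Sample candidate expansions and keep the most promising child.
        actions = rng.normal(size=(N_CANDIDATES, LATENT_DIM))
        children = [latent_dynamics(nodes[i], a) for a in actions]
        best = max(children, key=value)
        parents[len(nodes)] = i
        nodes.append(best)
    # Backtrack from the highest-value node to recover latent waypoints.
    goal = int(np.argmax([value(z) for z in nodes]))
    path, k = [], goal
    while k is not None:
        path.append(nodes[k])
        k = parents[k]
    return path[::-1]

waypoints = plan(rng.normal(size=64))
print(f"planned {len(waypoints)} latent waypoints")
```

How the resulting waypoints are decoded into low-level actions, and how the tree is scored under sparse rewards, would depend on details in the full paper that the abstract does not specify.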
