PALMER: Perception-Action Loop with Memory for Long-Horizon Planning

Published: 13 Dec 2022, Last Modified: 16 May 2023. CoRL 2022 Workshop on Long-Horizon Planning, Poster.
Keywords: Representation Learning, Memory, Planning, Reinforcement Learning
TL;DR: We describe a long-horizon planning method that combines learning-based perceptual representations with sampling-based planning algorithms. It operates by retrieving previously observed trajectory segments from a replay buffer and restitching them into new paths.
Abstract: To achieve autonomy in a priori unknown real-world scenarios, agents should be able to: i) act from high-dimensional sensory observations (e.g., images), ii) learn from past experience to adapt and improve, and iii) be capable of long-horizon planning. Classical planning algorithms (e.g., PRM, RRT) are proficient at handling long-horizon planning. Deep learning based methods in turn can provide the necessary representations to address the other two requirements, by modeling statistical contingencies between observations. In this direction, we introduce a general-purpose planning algorithm called PALMER that combines classical sampling-based planning algorithms (e.g., PRM, RRT) with learning-based perceptual representations. For training these perceptual representations, we combine Q-learning with contrastive representation learning to create a latent space where the distance between the embeddings of two states captures how easily an optimal policy can traverse between them. For planning with these perceptual representations, we re-purpose classical sampling-based planning algorithms to retrieve previously observed trajectory segments from a replay buffer and restitch them into approximately optimal paths that connect any given pair of start and goal states. This creates a tight feedback loop between representation learning, memory, reinforcement learning, and sampling-based planning. Please visit our website for further information:
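The retrieval-and-restitching idea described above can be sketched as a PRM-style shortest-path search over states stored in a replay buffer. This is a minimal illustrative sketch, not the paper's implementation: the 2-D states, the `radius` threshold, and the `latent_distance` function (stood in here by Euclidean distance) are assumptions; in PALMER the distance would come from the learned Q-learning/contrastive embedding.

```python
# Hypothetical sketch of PRM-style planning over a replay buffer.
import heapq
import math

def latent_distance(a, b):
    # Stand-in for the learned metric that estimates how easily an
    # optimal policy can traverse from state a to state b.
    # (Assumption: Euclidean distance over 2-D toy states.)
    return math.dist(a, b)

def build_graph(replay_buffer, radius):
    # PRM-style graph: connect pairs of previously observed states whose
    # latent distance is below a threshold, i.e. states the policy is
    # estimated to traverse between easily.
    graph = {i: [] for i in range(len(replay_buffer))}
    for i, s in enumerate(replay_buffer):
        for j, t in enumerate(replay_buffer):
            if i != j:
                d = latent_distance(s, t)
                if d <= radius:
                    graph[i].append((j, d))
    return graph

def plan(graph, start_idx, goal_idx):
    # Dijkstra shortest path over the graph: the resulting node sequence
    # restitches previously observed states into an approximately optimal
    # path from start to goal.
    dist = {start_idx: 0.0}
    prev = {}
    pq = [(0.0, start_idx)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal_idx:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    if goal_idx not in dist:
        return None  # goal not reachable with current experience
    path = [goal_idx]
    while path[-1] != start_idx:
        path.append(prev[path[-1]])
    return path[::-1]
```

Because the graph is rebuilt from the replay buffer, every new trajectory the agent collects enlarges the set of segments available for restitching, which is the feedback loop between memory and planning the abstract refers to.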