PALMER: Perception-Action Loop with Memory for Long-Horizon Planning

Published: 31 Oct 2022, 18:00 · Last Modified: 12 Jan 2023, 12:38 · NeurIPS 2022 Accept
Keywords: representation learning, memory, planning, reinforcement learning, statistical contingencies
TL;DR: Using action-informed perceptual representations, we develop a memory-based model of the environment that enables planning for long-horizon tasks.
Abstract: To achieve autonomy in a priori unknown real-world scenarios, agents should be able to: i) act from high-dimensional sensory observations (e.g., images), ii) learn from past experience to adapt and improve, and iii) be capable of long-horizon planning. Classical planning algorithms (e.g., PRM, RRT) are proficient at handling long-horizon planning. Deep-learning-based methods in turn can provide the necessary representations to address the other two requirements, by modeling statistical contingencies between observations. In this direction, we introduce a general-purpose planning algorithm called PALMER that combines classical sampling-based planning algorithms with learning-based perceptual representations. To train these perceptual representations, we combine Q-learning with contrastive representation learning to create a latent space where the distance between the embeddings of two states captures how easily an optimal policy can traverse between them. To plan with these perceptual representations, we re-purpose classical sampling-based planning algorithms to retrieve previously observed trajectory segments from a replay buffer and re-stitch them into approximately optimal paths that connect any given pair of start and goal states. This creates a tight feedback loop between representation learning, memory, reinforcement learning, and sampling-based planning. The end result is an experiential framework for long-horizon planning that is significantly more robust and sample-efficient than existing methods.
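The retrieve-and-restitch idea from the abstract can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the `embed` encoder and `latent_dist` metric are hypothetical stand-ins for PALMER's learned contrastive/Q-learning representation, using raw 2D grid coordinates and Manhattan distance as the traversal-cost proxy, and Dijkstra's algorithm plays the role of the sampling-based planner over states retrieved from a replay buffer.

```python
import heapq

def embed(state):
    # Hypothetical stand-in for PALMER's learned encoder. In the paper,
    # latent distance reflects how easily an optimal policy can traverse
    # between two states; here the state IS its own embedding (a 2D cell).
    return state

def latent_dist(a, b):
    # Toy traversal-cost proxy: Manhattan distance on the grid.
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def plan(buffer, start, goal, edge_thresh=1.5):
    """Retrieve states from a replay buffer and re-stitch them into a path
    from start to goal, in the spirit of PRM-style planners operating over
    previously experienced states. Edges connect only state pairs whose
    latent distance is below edge_thresh (i.e., plausibly traversable)."""
    nodes = [start, goal] + list(buffer)
    dist = {n: float("inf") for n in nodes}
    prev = {}
    dist[start] = 0.0
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue  # stale queue entry
        if u == goal:
            break
        for v in nodes:
            if v == u:
                continue
            w = latent_dist(embed(u), embed(v))
            if w > edge_thresh:
                continue  # too far apart in latent space to connect
            if d + w < dist[v]:
                dist[v] = d + w
                prev[v] = u
                heapq.heappush(pq, (d + w, v))
    if dist[goal] == float("inf"):
        return None  # no chain of buffered states links start to goal
    # Walk predecessors back from the goal to recover the stitched path.
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]
```

For example, with buffered states (1, 0) and (2, 0), planning from (0, 0) to (3, 0) stitches the intermediate states into a path, whereas an empty buffer yields no plan because the start and goal are farther apart than the edge threshold.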
Supplementary Material: zip