Plan-Seq-Learn: Language Model Guided RL for Solving Long Horizon Robotics Tasks

Published: 11 Mar 2024 · Last Modified: 15 Mar 2024 · LLMAgents @ ICLR 2024 Poster · CC BY 4.0
Keywords: Long-horizon robot learning, hierarchical reinforcement learning, LLMs
TL;DR: We propose a method that enables language-model-guided RL for long-horizon robotics tasks by integrating LLM planning, vision-based motion planning, and RL for low-level control.
Abstract: Large Language Models (LLMs) have been shown to be capable of performing high-level planning for long-horizon robotics tasks, yet existing methods require access to a pre-defined skill library (*e.g.*, picking, placing, pulling, pushing, navigating). However, LLM planning does not address how to design or learn those behaviors, which remains challenging, particularly in long-horizon settings. Furthermore, for many tasks of interest, the robot needs to be able to adjust its behavior in a fine-grained manner, requiring the agent to be capable of modifying *low-level* control actions. Can we instead use the internet-scale knowledge from LLMs for high-level policies, guiding reinforcement learning (RL) policies to efficiently solve robotic control tasks online without requiring a pre-determined set of skills? In this paper, we propose **Plan-Seq-Learn** (PSL): a modular approach that uses motion planning to bridge the gap between abstract language and learned low-level control for solving long-horizon robotics tasks from scratch. We demonstrate that PSL achieves state-of-the-art results on over **25** challenging robotics tasks with up to **10** stages. PSL solves long-horizon tasks from raw visual input spanning four benchmarks at success rates of **over 85%**, outperforming language-based, classical, and end-to-end approaches. Video results and code are available at https://planseqlearn.github.io/
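To make the modular structure described in the abstract concrete, the following is a minimal Python sketch of the Plan-Seq-Learn control flow: an LLM produces a sequence of high-level stages, a motion planner moves the robot near each stage's region of interest, and a learned RL policy takes over for fine-grained control. All names here (`llm_plan`, `motion_plan_to`, `RLPolicy`, `StubEnv`) are hypothetical placeholders for illustration, not the authors' actual API; the real system uses vision-based pose estimation and learned policies in place of these stubs.

```python
# Illustrative sketch of the Plan -> Seq -> Learn loop. All components
# are stubbed placeholders, not the paper's actual implementation.
from typing import List, Tuple
import random


def llm_plan(task_prompt: str) -> List[str]:
    # Plan: an LLM would decompose the task prompt into an ordered list
    # of high-level stages; stubbed here with a fixed two-stage plan.
    return ["reach the drawer handle", "pull the drawer open"]


def motion_plan_to(env: "StubEnv", stage: str) -> dict:
    # Seq(uence): a vision-based motion planner would move the robot
    # near the region relevant to `stage`; stubbed as a no-op.
    return env.observe()


class RLPolicy:
    # Learn: a learned low-level policy handles fine-grained,
    # contact-rich control within each stage; stubbed as random actions.
    def act(self, obs: dict, stage: str) -> float:
        return random.uniform(-1.0, 1.0)


class StubEnv:
    # Toy environment standing in for a robotics simulator.
    def __init__(self):
        self.t = 0

    def reset(self) -> dict:
        self.t = 0
        return self.observe()

    def observe(self) -> dict:
        return {"t": self.t}

    def step(self, action: float) -> Tuple[dict, float, bool]:
        self.t += 1
        done = self.t % 5 == 0  # pretend each stage ends after 5 steps
        return self.observe(), 0.0, done


def run_episode(env: StubEnv, policy: RLPolicy, task_prompt: str) -> None:
    stages = llm_plan(task_prompt)            # Plan: high-level stages
    obs = env.reset()
    for stage in stages:
        obs = motion_plan_to(env, stage)      # Seq: reach the stage region
        done = False
        while not done:                       # Learn: low-level RL control
            obs, reward, done = env.step(policy.act(obs, stage))


if __name__ == "__main__":
    run_episode(StubEnv(), RLPolicy(), "open the drawer")
```

The key design point this sketch reflects is the division of labor: the LLM is queried only for the stage sequence, the motion planner handles free-space reaching, and RL effort is concentrated on the short, local control segments where learned behavior is actually needed.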
Submission Number: 40