LLM Augmented Hierarchical Agents

Published: 21 Oct 2023, Last Modified: 06 Nov 2023
LangRob @ CoRL 2023 Poster
Keywords: LLM, Reinforcement Learning, Hierarchical Reinforcement Learning, Hierarchical Agents, Language Conditioned RL
TL;DR: Using LLMs to guide exploration in hierarchical agents by extracting common sense priors
Abstract: Solving long-horizon, temporally-extended tasks using Reinforcement Learning (RL) is challenging, compounded by the common practice of learning without prior knowledge (or tabula rasa learning). Humans can generate and execute plans with temporally-extended actions and quickly learn to perform new tasks because we almost never solve problems from scratch. We want autonomous agents to have this same ability. Recently, LLMs have been shown to encode a tremendous amount of knowledge about the world and to perform impressive in-context learning and reasoning. However, using LLMs to solve real-world tasks is hard because they are not grounded in the current task. In this paper we exploit the planning capabilities of LLMs while using RL to provide learning from the environment, resulting in a hierarchical agent that uses LLMs to solve long-horizon tasks. Rather than relying entirely on LLMs, we use them to guide a high-level policy, making learning significantly more sample efficient. This approach is evaluated in simulation environments such as MiniGrid, SkillHack, and Crafter, and on a real robot arm in block manipulation tasks. We show that agents trained using our approach outperform other baseline methods and, once trained, don't need access to LLMs during deployment.
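The abstract describes using LLM common-sense priors to guide a high-level policy's exploration. A minimal sketch of one way this could work is shown below; the `llm_subgoal_prior` stub, the subgoal names, and the epsilon-guided sampling scheme are illustrative assumptions, not the paper's actual implementation.

```python
import random

# Hypothetical stand-in for querying an LLM: score how plausible each
# high-level subgoal is for the given task description, then normalize
# into a probability distribution (the common-sense prior).
def llm_subgoal_prior(task, subgoals):
    scores = {g: (2.0 if g in task else 0.5) for g in subgoals}
    total = sum(scores.values())
    return {g: s / total for g, s in scores.items()}

def select_subgoal(q_values, prior, epsilon=0.2):
    """LLM-guided exploration for the high-level policy: with
    probability epsilon, sample a subgoal from the LLM prior instead
    of uniformly; otherwise act greedily on the learned Q-values.
    Once trained, the greedy branch alone suffices, so the LLM is not
    needed at deployment."""
    if random.random() < epsilon:
        goals, weights = zip(*prior.items())
        return random.choices(goals, weights=weights, k=1)[0]
    return max(q_values, key=q_values.get)

# Example usage with made-up MiniGrid-style subgoals.
subgoals = ["pick up key", "open door", "go to goal"]
prior = llm_subgoal_prior("pick up key then open door", subgoals)
chosen = select_subgoal({g: 0.0 for g in subgoals}, prior)
```

Because the LLM only biases exploration rather than dictating actions, the learned Q-values can override a misleading prior as environment experience accumulates.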
Submission Number: 41