Track: Full track
Keywords: LLM, Hierarchical Reinforcement Learning, Human/Agent Interaction
TL;DR: To establish a shared language with an agent's emerging symbolic representation, we integrate large language models with hierarchical reinforcement learning, improving task planning and abstraction through human-like spatial reasoning.
Abstract: Hierarchical Reinforcement Learning (HRL) breaks down complex tasks into manageable subtasks, but it faces challenges with efficiency and generalization in high-dimensional, open-ended environments. Human-in-the-loop approaches offer a potential solution to these limitations. In this work, we propose integrating large language models (LLMs) with HRL, leveraging LLMs' natural-language and reasoning capabilities, and we study how to bridge the gap between human instructions and HRL's learned abstract representations. By translating human demonstrations into actionable reinforcement learning signals, LLMs can improve task abstraction and planning within HRL. Our approach builds upon the Spatial-Temporal Abstraction via Reachability (STAR) algorithm, using an LLM to optimize the hierarchical planning process. Empirical results on continuous control tasks illustrate the potential of LLMs to enhance HRL, particularly in environments requiring spatial reasoning and hierarchical control.
Submission Number: 23