Obtaining Hierarchy from Human Instructions: an LLMs-based Approach

Published: 23 Oct 2023, Last Modified: 05 Nov 2023. CoRL23-WS-LEAP Poster.
Keywords: task hierarchy, large language models, linear temporal logic
TL;DR: Task hierarchy can be straightforwardly obtained from human instructions with the help of LLMs, as inferring hierarchy directly from observations may be challenging.
Abstract: Long-horizon planning in robotics is often hindered by challenges such as uncertainty accumulation, computational complexity, delayed rewards, and incomplete information. This work proposes an approach that exploits the task hierarchy inherent in human instructions to aid planning. Using Large Language Models (LLMs), we propose a two-step approach that translates multi-sentence human instructions into a structured language, Hierarchical Linear Temporal Logic (LTL), which serves as an intermediary for planning. First, LLMs transform the human instructions into a Hierarchical Task Network (HTN)-like representation that captures the logical and temporal relations among tasks. Then, a domain-specifically fine-tuned LLM or a human expert translates the sub-tasks of each task into flat LTL formulas, which are aggregated into hierarchical LTL specifications. These specifications are then leveraged for planning. Our framework not only bridges the gap between human instructions and algorithmic planning but also showcases the potential of LLMs to harness human-like hierarchical reasoning for automating complex task planning. Through simulated experiments, we demonstrate the efficacy of our approach in generating executable plans from human instructions, fostering more intuitive and user-friendly robotic assistance in everyday scenarios.
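To make the two-step pipeline concrete, the sketch below illustrates one plausible way to represent the HTN-like intermediate structure and aggregate flat LTL formulas into a hierarchical specification. All names (`TaskNode`, `aggregate`) and the sequential-composition rule (`F(p1 & F(p2 & ...))`) are illustrative assumptions, not the paper's actual implementation:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TaskNode:
    """One node of a hypothetical HTN-like intermediate representation.

    `name` is the task label produced in step 1; `ltl` holds the flat LTL
    formula for a leaf sub-task produced in step 2 (None for composite
    tasks, whose formula is aggregated from the children).
    """
    name: str
    ltl: Optional[str] = None
    children: list = field(default_factory=list)

def aggregate(node: TaskNode) -> str:
    """Bottom-up aggregation: compose a composite task's specification as
    the ordered eventual satisfaction of its children's formulas.
    This is one simple choice of composition rule, assumed for illustration."""
    if not node.children:
        return node.ltl or "true"
    parts = [aggregate(c) for c in node.children]
    # Sequentially nest: F(p1 & F(p2 & ... F(pn)))
    spec = parts[-1]
    for p in reversed(parts[:-1]):
        spec = f"F({p} & {spec})"
    return spec

# Example: "make coffee" decomposed into two ordered sub-tasks,
# each already translated into a flat LTL formula in step 2.
task = TaskNode("make_coffee", children=[
    TaskNode("grind_beans", ltl="F grind"),
    TaskNode("brew", ltl="F brew"),
])
print(aggregate(task))  # F(F grind & F brew)
```

A planner could then consume the aggregated specification at the root while still exploiting the per-sub-task formulas at the leaves, which is what makes the hierarchical form useful for long horizons.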
Submission Number: 19