Keywords: Large Language Models, Hierarchical Imitation Learning
Abstract: Hierarchical Imitation Learning (HIL) is effective for long-horizon decision-making, but it often requires extensive expert demonstrations and precise supervisory labels. In this work, we introduce SEAL, a novel framework that leverages the semantic and world knowledge embedded in Large Language Models (LLMs) to autonomously define sub-goal spaces and pre-label states with semantically meaningful sub-goal representations, without requiring prior knowledge of the task hierarchy. SEAL utilizes a dual-encoder architecture that combines LLM-guided supervised sub-goal learning with unsupervised Vector Quantization (VQ) to enhance the robustness of sub-goal representations. Additionally, SEAL incorporates a transition-augmented low-level planner that improves adaptation to sub-goal transitions. Our experimental results demonstrate that SEAL outperforms state-of-the-art HIL and LLM-based planning approaches, particularly in settings with small expert datasets and complex, long-horizon tasks.
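To make the dual-encoder idea in the abstract concrete, below is a minimal, hypothetical sketch (not the authors' released code) of how a supervised head trained on LLM-provided sub-goal labels could be combined with an unsupervised Vector Quantization head over the same states. All names, dimensions, and the loss combination are illustrative assumptions; the paper itself defines the actual architecture.

```python
# Illustrative sketch only: a dual encoder pairing LLM-supervised sub-goal
# classification with a VQ-VAE-style discrete sub-goal codebook. Every name,
# size, and loss weight here is an assumption, not the paper's specification.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VectorQuantizer(nn.Module):
    """Snap each latent to its nearest codebook entry (standard VQ-VAE scheme)."""
    def __init__(self, num_codes: int, dim: int, beta: float = 0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)
        self.codebook.weight.data.uniform_(-1.0 / num_codes, 1.0 / num_codes)
        self.beta = beta  # commitment-loss weight

    def forward(self, z):
        # Squared distances between each latent and every codebook entry.
        d = (z.pow(2).sum(1, keepdim=True)
             - 2 * z @ self.codebook.weight.t()
             + self.codebook.weight.pow(2).sum(1))
        idx = d.argmin(dim=1)
        z_q = self.codebook(idx)
        # Codebook + commitment losses; straight-through estimator for gradients.
        vq_loss = F.mse_loss(z_q, z.detach()) + self.beta * F.mse_loss(z, z_q.detach())
        z_q = z + (z_q - z).detach()
        return z_q, idx, vq_loss

class DualEncoder(nn.Module):
    """Two encoders over the same state: one supervised by LLM sub-goal
    labels, one learning discrete sub-goal codes via VQ."""
    def __init__(self, state_dim: int, hidden: int, num_subgoals: int):
        super().__init__()
        self.sup_encoder = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                                         nn.Linear(hidden, num_subgoals))
        self.vq_encoder = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                                        nn.Linear(hidden, hidden))
        self.quantizer = VectorQuantizer(num_subgoals, hidden)

    def forward(self, state, llm_subgoal_labels):
        logits = self.sup_encoder(state)  # supervised sub-goal prediction
        sup_loss = F.cross_entropy(logits, llm_subgoal_labels)
        z_q, codes, vq_loss = self.quantizer(self.vq_encoder(state))
        return sup_loss + vq_loss, logits, codes

# Toy usage: 8 sub-goal categories "pre-labeled by an LLM" (random stand-ins here).
model = DualEncoder(state_dim=16, hidden=64, num_subgoals=8)
states = torch.randn(32, 16)
labels = torch.randint(0, 8, (32,))
loss, _, _ = model(states, labels)
loss.backward()
```

The design intuition, as the abstract frames it: the supervised branch injects the LLM's semantic prior over sub-goals, while the VQ branch discovers discrete structure directly from the demonstrations, so the learned sub-goal representation is robust even when the LLM labels are imperfect or the expert dataset is small.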
Submission Number: 80