LEAGUE++: EMPOWERING CONTINUAL ROBOT LEARNING THROUGH GUIDED SKILL ACQUISITION WITH LARGE LANGUAGE MODELS

Published: 05 Mar 2024 · Last Modified: 12 May 2024 · ICLR 2024 AGI Workshop Poster · CC BY 4.0
Keywords: RL, LLM, TAMP, Continuous Learning, Lifelong Learning, Curriculum Learning, Robotic Learning
TL;DR: LEAGUE++ is a framework that integrates LLMs with DRL and TAMP for continuous and lifelong skill learning in robots.
Abstract:

To support daily human tasks, robots must tackle intricate, long-horizon tasks and continuously acquire new skills to handle novel problems. Deep reinforcement learning (DRL) offers the potential to learn fine-grained skills but relies heavily on human-defined rewards and struggles with long-horizon tasks. Task and Motion Planning (TAMP) is adept at handling long-horizon tasks but often requires tailored, domain-specific skills, resulting in practical limitations and inefficiencies. To address these challenges, we developed LEAGUE++, a framework that leverages Large Language Models (LLMs) to integrate TAMP and DRL for continuous skill learning in long-horizon tasks. Our framework achieves automatic task decomposition, operator creation, and dense reward generation for efficiently acquiring the desired skills. To facilitate new skill learning, LEAGUE++ maintains a symbolic skill library and uses the existing model of a semantically related skill to warm-start training. Our method, LEAGUE++, demonstrates superior performance compared to baselines across four challenging simulated task domains. Furthermore, we demonstrate the ability to reuse learned skills to expedite learning in new task domains.
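The skill-library warm-start idea described above can be sketched in a few lines. This is a hypothetical illustration only: the class names, the dictionary stand-in for a DRL policy, and the token-overlap similarity are all assumptions for exposition, not the authors' implementation (the paper uses LLM-derived semantic relatedness).

```python
# Hypothetical sketch of a symbolic skill library with semantic
# warm-starting, in the spirit of LEAGUE++. All names and the
# token-overlap similarity heuristic are illustrative assumptions.

from dataclasses import dataclass, field


@dataclass
class Skill:
    name: str     # symbolic operator name, e.g. "pick_block"
    policy: dict  # stand-in for learned DRL policy parameters


@dataclass
class SkillLibrary:
    skills: dict = field(default_factory=dict)

    def add(self, skill: Skill) -> None:
        self.skills[skill.name] = skill

    def most_similar(self, new_name: str):
        # Toy similarity: count shared tokens in the operator names.
        # LEAGUE++ instead queries an LLM for semantic relatedness.
        def score(existing_name: str) -> int:
            a, b = set(new_name.split("_")), set(existing_name.split("_"))
            return len(a & b)

        if not self.skills:
            return None
        return max(self.skills.values(), key=lambda s: score(s.name))

    def warm_start(self, new_name: str) -> Skill:
        # Initialize the new skill's policy from its closest existing
        # relative, rather than training from scratch.
        related = self.most_similar(new_name)
        init = dict(related.policy) if related else {}
        return Skill(new_name, init)


lib = SkillLibrary()
lib.add(Skill("pick_block", {"w": 1.0}))
new_skill = lib.warm_start("pick_mug")  # inherits pick_block's parameters
```

The design point this sketch captures is that a new operator never starts from a blank policy when a related skill already exists, which is what makes continual acquisition tractable.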

Submission Number: 39