EMPOWERING CONTINUAL ROBOT LEARNING THROUGH GUIDED SKILL ACQUISITION WITH LANGUAGE MODELS

Published: 05 Apr 2024, Last Modified: 21 Apr 2024 · VLMNM 2024 · CC BY 4.0
Keywords: RL, LLM, TAMP, Continuous Learning, Lifelong Learning, Curriculum Learning, Robotic Learning
TL;DR: LEAGUE++ is a framework that integrates LLMs with DRL and TAMP for continuous and lifelong skill learning in robots.
Abstract: To support daily human tasks, robots must handle complex, long-term tasks and continuously learn new skills. Deep reinforcement learning (DRL) offers potential for fine-grained skill learning but faces challenges with long-horizon tasks and relies heavily on human-defined rewards. Task and Motion Planning (TAMP) excels at long-horizon tasks but requires tailored domain-specific skills, limiting practicality. To address these challenges, we developed LEAGUE++, which integrates Large Language Models (LLMs) with TAMP and DRL for continual skill learning. Our framework automates task decomposition, operator creation, and dense reward generation for efficient skill acquisition. LEAGUE++ maintains a symbolic skill library and leverages existing models to warm-start training and facilitate new skill learning. Our method outperforms baselines across four challenging simulated task domains and demonstrates skill reuse to expedite learning in new domains. Video results are available at: https://sites.google.com/view/continuallearning.
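As an illustration only (not taken from the paper), a minimal sketch of the loop the abstract describes, where an LLM proposes symbolic operators with dense rewards and a skill library is reused to warm-start new skills, might look like the following. All names here (query_llm, train_rl, SkillLibrary, Operator) are hypothetical placeholders, not LEAGUE++'s actual API.

```python
# Hypothetical sketch: LLM-driven task decomposition, operator/reward creation,
# and warm-started skill learning, as summarized in the abstract.
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Optional


@dataclass
class Operator:
    name: str                              # symbolic name, e.g. "pick_block"
    preconditions: List[str]               # symbolic facts required before execution
    effects: List[str]                     # symbolic facts true after execution
    reward_fn: Callable[[dict], float]     # dense reward used to train the skill with RL


@dataclass
class SkillLibrary:
    skills: Dict[str, object] = field(default_factory=dict)  # skill name -> trained policy

    def closest(self, name: str) -> Optional[object]:
        # Toy similarity: reuse a policy whose name shares a token with the new skill.
        for known, policy in self.skills.items():
            if set(known.split("_")) & set(name.split("_")):
                return policy
        return None


def decompose_and_learn(task: str, library: SkillLibrary, query_llm, train_rl):
    """Ask the LLM to decompose a task, then learn each missing operator's skill."""
    operators: List[Operator] = query_llm(
        f"Decompose '{task}' into symbolic operators with dense reward functions"
    )
    for op in operators:
        if op.name in library.skills:
            continue                              # skill already learned, reuse as-is
        init_policy = library.closest(op.name)    # warm-start from a related skill, if any
        library.skills[op.name] = train_rl(op.reward_fn, init=init_policy)
    return [library.skills[op.name] for op in operators]
```

The sketch only conveys the division of labor suggested by the abstract: the LLM handles decomposition and reward generation, TAMP-style symbolic operators structure the plan, and DRL trains each skill, warm-started from the library when a related skill already exists.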
Submission Number: 11