Continual Robot Skill and Task Learning via Dialogue

Published: 07 May 2025 · Last Modified: 07 May 2025 · ICRA Workshop Human-Centered Robot Learning · CC BY 4.0
Workshop Statement: We developed a novel robot agent that can continually learn novel tasks and skills from dialogue interactions with human users. Humans intuitively share information and teach tasks and skills to each other through dialogue. In our work, we leverage this natural communication channel to allow novice human users to teach skills and tasks to robots. We also developed a continual learning algorithm that learns novel skills incrementally without forgetting existing skills. Finally, we conducted a human-subjects study to demonstrate that our system works with novice human users. We believe such an interactive learning robot agent is of interest to this workshop.
Keywords: Human-Robot Dialogue, Continual Learning, Large Language Model, Robot Learning
TL;DR: We developed a novel framework that enables robots to learn novel skills and tasks from interactions with users, and designed a human-subjects study to show that our framework works with non-expert human users.
Abstract: Interactive robot learning is a challenging problem because the robot is presented with human users who expect it to perpetually learn novel skills to solve novel tasks with sample efficiency. In this work, we present a framework for robots to continually learn visuo-motor robot skills and task-relevant information via natural language dialogue interactions with human users. Previous approaches either focus on improving the performance of instruction-following agents, on task learning with language, or on passively learning novel skills or concepts. Instead, we developed a robot agent that queries unknown skills from real human users and continually learns these novel skills using only a few robot demonstrations provided by the users. To achieve this goal, we developed a novel continual learning policy, Action Chunking Transformer with Low-Rank Adaptation (ACT-LoRA), and integrated an existing Large Language Model (LLM) to interact with a human user and perform grounded interactive continual skill learning to solve a task. Our ACT-LoRA policy consistently outperforms GMM-LoRA on continual learning, achieving a 40% improvement on the RLBench dataset and a 30% improvement on the LIBERO dataset for fine-tuned skills. Additionally, we performed an IRB-approved human-subjects study in a sandwich-making domain to demonstrate that our framework can learn novel dynamic skills from non-expert human users and complete tasks using dialogue interactions. Our framework achieved an overall 87.5% task completion rate for making novel sandwiches, and a 100% success rate on performing the novel skills learned from human users during the test phase of the study. This result illustrates the promise of a continual learning robot that, once taught a task, saves users time in the future compared to non-learning agents.
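The abstract describes ACT-LoRA, which pairs an action-chunking transformer policy with low-rank adapters so that new skills can be fine-tuned from a few demonstrations without overwriting the weights encoding existing skills. As a rough illustration of the low-rank-adaptation half of that idea only, the PyTorch sketch below freezes a stand-in transformer policy and injects trainable LoRA adapters into its feed-forward linear layers; the module shapes, rank, and training loop are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (not the authors' ACT-LoRA code): freeze a base transformer policy
# and add trainable low-rank adapters so a new skill can be learned while the
# original weights stay intact. All dimensions and the rank are assumptions.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Frozen nn.Linear plus a trainable low-rank update: W x + scale * B(A(x))."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # preserve previously learned skills
        self.lora_a = nn.Linear(base.in_features, rank, bias=False)
        self.lora_b = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)  # adapter starts as a no-op
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))


def add_lora_adapters(module: nn.Module, rank: int = 8) -> nn.Module:
    """Recursively wrap feed-forward nn.Linear layers with LoRA adapters."""
    for name, child in module.named_children():
        if isinstance(child, nn.MultiheadAttention):
            continue  # PyTorch fuses attention projections; adapt only MLP linears here
        if isinstance(child, nn.Linear):
            setattr(module, name, LoRALinear(child, rank=rank))
        else:
            add_lora_adapters(child, rank=rank)
    return module


if __name__ == "__main__":
    # Stand-in for a pretrained action-chunking transformer policy (hypothetical shapes).
    policy = nn.TransformerEncoder(
        nn.TransformerEncoderLayer(d_model=256, nhead=8, batch_first=True), num_layers=4
    )
    for p in policy.parameters():
        p.requires_grad = False          # freeze everything learned so far
    policy = add_lora_adapters(policy, rank=8)

    trainable = [p for p in policy.parameters() if p.requires_grad]
    print("trainable adapter params:", sum(p.numel() for p in trainable))

    # Fine-tune only the adapters on a handful of demonstrations for the new skill.
    optim = torch.optim.AdamW(trainable, lr=1e-4)
    obs_tokens = torch.randn(2, 10, 256)      # dummy observation tokens
    target_chunk = torch.randn(2, 10, 256)    # dummy action-chunk targets
    loss = nn.functional.mse_loss(policy(obs_tokens), target_chunk)
    loss.backward()
    optim.step()
```

Because the adapter weights for each new skill are small and the base weights never change, prior skills are retained by construction; this is the usual motivation for LoRA-style continual fine-tuning, though the paper's specific adapter placement and training recipe may differ.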
Submission Number: 1
