Interactive Robot Learning from Verbal Correction

Published: 21 Oct 2023, Last Modified: 29 Oct 2023. LangRob @ CoRL 2023 Poster
Keywords: Robot Learning, Human-in-the-Loop, LLM for Robotics, Language Correction
TL;DR: An LLM-based learning system that allows everyday users to teach a robot using verbal corrections when the robot makes mistakes.
Abstract: The ability for robots to learn and refine behavior during deployment has become ever more important as we design them to operate in less structured environments like households. Recently, researchers have proposed various robotic systems that can follow verbal commands or corrections with the help of vision-language models (VLMs) and large language models (LLMs). In this work, we design a new LLM-based learning system, LoCaL, that allows everyday users to teach a robot using verbal corrections when the robot makes mistakes, e.g., by saying sentences such as “Stop what you’re doing. You should move closer to the cup.” A key feature of LoCaL is its ability to update the robot’s visuomotor neural policy based on the verbal feedback, so that mistakes are not repeated in the future. We demonstrate the efficacy of our design in experiments where a user teaches a robotic manipulator to perform tasks involving household objects on a table. Videos and more results are at https://sites.google.com/view/local2023/.
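The abstract describes a loop in which a verbal correction is parsed by an LLM and used to update the visuomotor policy. The sketch below is purely illustrative and is not the authors' implementation: all names (Step, llm_parse_correction, relabel_trajectory, finetune_policy) are hypothetical, the LLM call is stubbed out, and the policy update is a placeholder. It only shows one plausible shape of such a pipeline under those assumptions.

```python
# Illustrative sketch only -- NOT the LoCaL implementation. It mimics the
# high-level loop described in the abstract: the user interrupts the robot
# with a verbal correction, an LLM turns the utterance into a structured
# edit of the failed trajectory, and the visuomotor policy is updated on
# the corrected data. All names and details here are hypothetical.

from dataclasses import dataclass
from typing import List

@dataclass
class Step:
    observation: list   # placeholder for image features / proprioception
    action: list        # end-effector action actually taken

def llm_parse_correction(utterance: str) -> dict:
    """Stand-in for an LLM call that maps a verbal correction to a
    structured edit, e.g. {'object': 'cup', 'adjustment': 'move_closer'}."""
    # A real system would prompt an LLM; this stub just keyword-matches.
    edit = {"adjustment": "move_closer"} if "closer" in utterance else {}
    if "cup" in utterance:
        edit["object"] = "cup"
    return edit

def relabel_trajectory(traj: List[Step], edit: dict) -> List[Step]:
    """Apply the parsed correction to the failed trajectory to synthesize
    corrected training data (here: a toy positional offset)."""
    corrected = []
    for step in traj:
        new_action = list(step.action)
        if edit.get("adjustment") == "move_closer":
            new_action[0] += 0.05  # nudge along x toward the object (toy)
        corrected.append(Step(step.observation, new_action))
    return corrected

def finetune_policy(policy_params: dict, data: List[Step]) -> dict:
    """Placeholder for a gradient update of the visuomotor policy."""
    policy_params = dict(policy_params)
    policy_params["num_updates"] = policy_params.get("num_updates", 0) + len(data)
    return policy_params

if __name__ == "__main__":
    failed_traj = [Step([0.0], [0.10, 0.0]), Step([0.1], [0.12, 0.0])]
    edit = llm_parse_correction(
        "Stop what you're doing. You should move closer to the cup.")
    policy = finetune_policy({"name": "visuomotor_policy"},
                             relabel_trajectory(failed_traj, edit))
    print(edit, policy)
```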
Submission Number: 36