Learning Language-Conditioned Robot Behavior from Offline Data and Crowd-Sourced Annotation

Published: 13 Sept 2021, Last Modified: 05 May 2023 · CoRL 2021 Poster
Keywords: Natural Language, Offline Reinforcement Learning, Visuomotor Manipulation
Abstract: We study the problem of learning a range of vision-based manipulation tasks from a large offline dataset of robot interaction. To accomplish this, humans need easy and effective ways of specifying tasks to the robot. Goal images are one popular form of task specification, as they are already grounded in the robot's observation space. However, goal images also have a number of drawbacks: they are inconvenient for humans to provide, they can over-specify the desired behavior, leading to a sparse reward signal, or they can under-specify task information for non-goal-reaching tasks. Natural language provides a convenient and flexible alternative for task specification, but comes with the challenge of grounding language in the robot's observation space. To scalably learn this grounding, we propose to leverage offline pre-collected robotic datasets (including highly sub-optimal, autonomously collected data) with crowd-sourced natural language labels. With this data, we learn a simple classifier which predicts whether a change in state completes a language instruction. This provides a language-conditioned reward function that can then be used for offline multi-task RL. In our experiments, we find that on language-conditioned manipulation tasks our approach outperforms both goal-image specification and language-conditioned imitation techniques by more than 25%, and is able to perform a range of visuomotor tasks from natural language, such as "open the right drawer" and "move the stapler", on a Franka Emika Panda robot.
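The core idea described in the abstract — a binary classifier that scores whether the change from an initial state to a current state completes a language instruction, reused as a language-conditioned reward — can be sketched as follows. This is a minimal illustrative sketch, not the authors' actual architecture: the encoders, feature dimension, and function names (`encode_image`, `encode_text`, `language_reward`) are all assumptions standing in for learned image and text encoders.

```python
import numpy as np

rng = np.random.default_rng(0)
FEAT_DIM = 32  # assumed embedding size (illustrative)

# Stand-ins for learned encoders (in the paper's setting these would be,
# e.g., a CNN over images and a pretrained language-model embedding).
W_img = rng.normal(size=(FEAT_DIM, FEAT_DIM))
W_txt = rng.normal(size=(FEAT_DIM, FEAT_DIM))

def encode_image(x):
    return np.tanh(W_img @ x)

def encode_text(z):
    return np.tanh(W_txt @ z)

# Linear head over [phi(s0), phi(st), psi(l)] -> P(instruction completed).
w_head = rng.normal(size=3 * FEAT_DIM)

def completion_prob(s0, st, instruction):
    """Probability that moving from state s0 to state st completes the instruction."""
    feats = np.concatenate(
        [encode_image(s0), encode_image(st), encode_text(instruction)]
    )
    return 1.0 / (1.0 + np.exp(-w_head @ feats))

def language_reward(s0, st, instruction):
    # The classifier's output probability is used directly as the
    # language-conditioned reward for offline multi-task RL.
    return completion_prob(s0, st, instruction)

# Usage sketch with random stand-in inputs:
s0 = rng.normal(size=FEAT_DIM)      # initial observation features
st = rng.normal(size=FEAT_DIM)      # current observation features
instr = rng.normal(size=FEAT_DIM)   # embedding of, e.g., "open the right drawer"
r = language_reward(s0, st, instr)
```

In practice the classifier would be trained on crowd-sourced (state change, instruction, label) triples, and the resulting reward plugged into any offline RL algorithm; the sketch above only shows the interface, not the training loop.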