SHADOW: Leveraging Segmentation Masks for Cross-Embodiment Policy Transfer

Published: 05 Sept 2024, Last Modified: 22 Oct 2024 · CoRL 2024 · CC BY 4.0
Keywords: Cross-embodiment learning, Imitation Learning, Manipulation
TL;DR: We introduce Shadow, an efficient data editing scheme for robust cross-embodiment learning from a source to a target robot. Shadow overlays a composite segmentation mask of the two robots on input images during both policy training and evaluation.
Abstract: Data collection in robotics is spread across diverse hardware, and this variation will increase as new hardware is developed. Effective use of this growing body of data requires methods capable of learning from diverse robot embodiments. We consider the setting of training a policy using expert trajectories from a single robot arm (the source), and evaluating on a different robot arm for which no data was collected (the target). We present a data editing scheme termed Shadow, in which the robot during training and evaluation is replaced with a composite segmentation mask of the source and target robots. In this way, the input data distributions at train and test time match closely, enabling robust policy transfer to the new unseen robot while being far more data efficient than approaches that require co-training on large amounts of data from diverse embodiments. We demonstrate that an approach as simple as Shadow is effective both in simulation on varying tasks and robots, and on real robot hardware, where Shadow demonstrates over 2x improvement in success rate compared to the strongest baseline.
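The core editing operation described in the abstract can be illustrated with a minimal sketch: given per-pixel silhouettes of the source and target robots, paint the union of the two silhouettes over the frame so the same composite mask appears at both train and test time. This is an assumption-laden illustration, not the paper's actual implementation; the function name `shadow_edit`, the flat fill color, and the toy masks are all hypothetical.

```python
import numpy as np

def shadow_edit(image, source_mask, target_mask, fill=(0, 0, 0)):
    """Replace the robot region in `image` with a flat-colored composite
    mask covering the union of the source- and target-robot silhouettes.

    image:        (H, W, 3) uint8 RGB frame
    source_mask:  (H, W) bool, True where the source robot appears
    target_mask:  (H, W) bool, True where the target robot would appear
    fill:         RGB color painted over the composite region (hypothetical
                  choice; the paper does not specify this sketch's details)
    """
    composite = source_mask | target_mask  # union of the two silhouettes
    edited = image.copy()
    edited[composite] = np.array(fill, dtype=np.uint8)
    return edited

# Toy example: a white 4x4 frame where the "source robot" occupies the
# left column and the "target robot" the top row.
img = np.full((4, 4, 3), 255, dtype=np.uint8)
src = np.zeros((4, 4), dtype=bool)
src[:, 0] = True
tgt = np.zeros((4, 4), dtype=bool)
tgt[0, :] = True
out = shadow_edit(img, src, tgt)
```

Applying the same edit to both the source-robot demonstrations and the target-robot observations is what keeps the policy's visual input distribution matched across embodiments.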
Supplementary Material: zip
Spotlight Video: mp4
Website: https://shadow-cross-embodiment.github.io/
Publication Agreement: pdf
Student Paper: yes
Submission Number: 515