ManiWAV: Learning Robot Manipulation from In-the-Wild Audio-Visual Data

Published: 05 Sept 2024, Last Modified: 08 Nov 2024, CoRL 2024, CC BY 4.0
Keywords: Robot Manipulation, Imitation Learning, Audio
TL;DR: We introduce ManiWAV: a data collection device for collecting in-the-wild human demonstrations with synchronous audio and visual feedback, and a corresponding policy interface to learn robot manipulation policies directly from the demonstrations.
Abstract: Audio signals provide rich information about robot interaction and object properties through contact. This information can surprisingly ease the learning of contact-rich robot manipulation skills, especially when visual information alone is ambiguous or incomplete. However, the use of audio data in robot manipulation has been constrained to teleoperated demonstrations collected by attaching a microphone to either the robot or the object, which significantly limits its usage in robot learning pipelines. In this work, we introduce ManiWAV: an 'ear-in-hand' data collection device for collecting in-the-wild human demonstrations with synchronous audio and visual feedback, and a corresponding policy interface to learn robot manipulation policies directly from the demonstrations. We demonstrate the capabilities of our system on four contact-rich manipulation tasks that require either passively sensing contact events and modes, or actively sensing object surface materials and states. In addition, we show that our system can generalize to unseen in-the-wild environments by learning from diverse in-the-wild human demonstrations. All data, code, and policies will be made public.
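For intuition about the policy interface the abstract describes, the sketch below shows one plausible way to fuse synchronous audio and visual observations for action prediction: contact audio is converted to a mel spectrogram and encoded alongside an RGB image, and the fused features drive an action head. This is a minimal hypothetical example, not the ManiWAV architecture; all module names, dimensions, and the concatenation-based fusion are assumptions, and the actual implementation is in the linked code repository.

```python
# Hypothetical audio-visual policy sketch (NOT the ManiWAV implementation):
# mel-spectrogram encoder for contact audio + CNN image encoder -> MLP head.
import torch
import torch.nn as nn
import torchaudio


class AudioVisualPolicy(nn.Module):
    def __init__(self, action_dim: int = 7, sample_rate: int = 16000):
        super().__init__()
        # Turn the raw gripper-microphone waveform into a spectrogram "image".
        self.mel = torchaudio.transforms.MelSpectrogram(
            sample_rate=sample_rate, n_mels=64)
        self.audio_enc = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())  # -> (B, 32)
        self.vision_enc = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())  # -> (B, 32)
        # Fuse both modalities by concatenation and regress an action.
        self.head = nn.Sequential(
            nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, action_dim))

    def forward(self, audio: torch.Tensor, image: torch.Tensor) -> torch.Tensor:
        # audio: (B, num_samples) raw waveform; image: (B, 3, H, W) RGB.
        spec = self.mel(audio).unsqueeze(1)  # (B, 1, n_mels, time)
        feat = torch.cat(
            [self.audio_enc(spec), self.vision_enc(image)], dim=-1)
        return self.head(feat)


policy = AudioVisualPolicy()
action = policy(torch.randn(1, 16000), torch.randn(1, 3, 96, 96))
print(action.shape)  # torch.Size([1, 7])
```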
Supplementary Material: zip
Spotlight Video: mp4
Video: https://www.youtube.com/watch?v=SzHENLZ7_tc
Website: https://maniwav.github.io/
Code: https://github.com/real-stanford/maniwav
Publication Agreement: pdf
Student Paper: yes
Submission Number: 231