That Sounds Right: Auditory Self-Supervision for Dynamic Robot Manipulation

Published: 30 Aug 2023, Last Modified: 17 Oct 2023
CoRL 2023 Poster
Keywords: Dynamic manipulation, Self-supervised learning, Audio
TL;DR: Learning contact-rich, dynamic manipulation behaviors using self-supervised techniques in audio.
Abstract: Learning to produce contact-rich, dynamic behaviors from raw sensory data has been a longstanding challenge in robotics. Prominent approaches primarily focus on using visual and tactile sensing. However, pure vision often fails to capture high-frequency interaction, while current tactile sensors can be too delicate for large-scale data collection. In this work, we propose a data-centric approach to dynamic manipulation that uses an often-ignored source of information: sound. We first collect a dataset of 25k interaction-sound pairs across five dynamic tasks using contact microphones. Then, given this data, we leverage self-supervised learning to accelerate behavior prediction from sound. Our experiments indicate that this self-supervised 'pretraining' is crucial to achieving high performance, yielding a 34.5% lower MSE than plain supervised learning and a 54.3% lower MSE than visual training. Importantly, we find that when asked to generate desired sound profiles, online rollouts of our models on a UR10 robot can produce dynamic behavior that achieves an average 11.5% improvement over supervised learning on audio-similarity metrics. Videos and audio data are best experienced on our project website: aurl-anon.github.io
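To make the two-stage recipe in the abstract concrete, below is a minimal sketch of "self-supervised audio pretraining, then supervised behavior regression." Everything here is illustrative: the abstract does not specify the SSL objective, encoder architecture, spectrogram shape, or action dimensionality, so the SimCLR-style NT-Xent loss, the small CNN, and the 4-dimensional action head are assumptions (the authors' actual recipe is in the linked code repository).

```python
# Hypothetical sketch, not the paper's implementation: pretrain an audio
# encoder with a contrastive loss on spectrogram views, then fine-tune a
# regression head to predict action parameters (MSE, as in the abstract).
import torch
import torch.nn as nn
import torch.nn.functional as F

class AudioEncoder(nn.Module):
    """Small CNN over (1, n_mels, time) spectrograms -> embedding."""
    def __init__(self, emb_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, emb_dim),
        )

    def forward(self, x):
        return self.net(x)

def nt_xent(z1, z2, tau: float = 0.1):
    """SimCLR-style contrastive loss between two augmented views."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau                       # (B, B) similarities
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)

encoder = AudioEncoder()
opt = torch.optim.Adam(encoder.parameters(), lr=3e-4)

# Stage 1: self-supervised pretraining step. Random tensors stand in for
# two augmented views of the same contact-microphone spectrogram.
view1 = torch.randn(16, 1, 64, 100)
view2 = view1 + 0.1 * torch.randn_like(view1)        # crude "augmentation"
loss = nt_xent(encoder(view1), encoder(view2))
opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: supervised fine-tuning, sound -> action parameters.
# The 4-DoF action parameterization is an assumption for illustration.
head = nn.Linear(128, 4)
spec = torch.randn(16, 1, 64, 100)                   # stand-in batch
action = torch.randn(16, 4)                          # stand-in labels
mse = F.mse_loss(head(encoder(spec)), action)
```

The design choice the abstract emphasizes is that stage 1 needs no action labels, so the encoder can exploit all 25k interaction-sound pairs before the (harder) regression problem is attempted; the reported 34.5% MSE reduction is attributed to exactly this pretraining.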
Student First Author: yes
Supplementary Material: zip
Video: https://youtu.be/LqD64FlLj0o?si=kmy7kmrQpk-VSnPY
Website: https://audio-robot-learning.github.io/
Code: https://github.com/abitha-thankaraj/audio-robot-learning
Publication Agreement: pdf
Poster Spotlight Video: mp4