Learning Precise, Contact-Rich Manipulation through Uncalibrated Tactile Skins

Published: 26 Oct 2024, Last Modified: 03 Dec 2024 · WTP · CC BY 4.0
Keywords: tactile sensing, robot learning
TL;DR: Framework for learning spatially generalizable, visuotactile policies for precise, contact-rich manipulation
Abstract: While visuomotor policy learning has advanced robotic manipulation, precisely executing contact-rich tasks remains challenging due to the limitations of vision in reasoning about physical interactions. To address this, recent work has sought to integrate tactile sensing into policy learning. However, many existing approaches rely on optical tactile sensors that are either restricted to recognition tasks or require complex dimensionality-reduction steps for policy learning. In this work, we explore learning policies with magnetic skin sensors, which are inherently low-dimensional, highly sensitive, and inexpensive to integrate with robotic platforms. To leverage these sensors effectively, we present ViSk, a simple framework that uses a transformer-based policy and treats skin sensor data as additional tokens alongside visual information. Evaluated on four complex real-world tasks (credit card swiping, plug insertion, USB insertion, and bookshelf retrieval), ViSk significantly outperforms both vision-only policies and policies based on optical tactile sensing. Further analysis reveals that combining tactile and visual modalities enhances policy performance and spatial generalization, achieving an average improvement of 27.5% across tasks.
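The abstract's core architectural idea is to project low-dimensional skin readings into the same token space as visual features and let a transformer attend over both. The paper's exact implementation is not given here; the following is a minimal sketch of that idea in PyTorch, where the module names (`VisuotactilePolicy`), token dimensions, per-pad skin vector size, and the `[ACT]` readout token are all illustrative assumptions, not the authors' code.

```python
# Minimal sketch (not the authors' implementation): a transformer policy that
# treats low-dimensional magnetic-skin readings as extra tokens alongside
# visual tokens. All names, dimensions, and the action head are assumptions.
import torch
import torch.nn as nn


class VisuotactilePolicy(nn.Module):
    def __init__(self, img_feat_dim=512, skin_dim=15, d_model=256,
                 n_heads=4, n_layers=4, action_dim=7):
        super().__init__()
        # Project pre-extracted visual features (e.g., from a frozen image
        # encoder) and raw skin-sensor vectors into a shared token space.
        self.vis_proj = nn.Linear(img_feat_dim, d_model)
        self.skin_proj = nn.Linear(skin_dim, d_model)
        # Learned [ACT] token whose output embedding is decoded into an action.
        self.act_token = nn.Parameter(torch.zeros(1, 1, d_model))
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.action_head = nn.Linear(d_model, action_dim)

    def forward(self, vis_feats, skin_readings):
        # vis_feats:     (B, n_cams, img_feat_dim)  one token per camera view
        # skin_readings: (B, n_pads, skin_dim)      one token per skin pad
        vis_tok = self.vis_proj(vis_feats)
        skin_tok = self.skin_proj(skin_readings)
        act_tok = self.act_token.expand(vis_tok.size(0), -1, -1)
        # Concatenate [ACT], visual, and tactile tokens into one sequence.
        tokens = torch.cat([act_tok, vis_tok, skin_tok], dim=1)
        out = self.encoder(tokens)
        # Predict the action from the [ACT] token's output embedding.
        return self.action_head(out[:, 0])


# Usage: batch of 2 observations, 2 camera views, 4 skin pads of 15 values.
policy = VisuotactilePolicy()
action = policy(torch.randn(2, 2, 512), torch.randn(2, 4, 15))
print(action.shape)  # torch.Size([2, 7])
```

One appeal of this token-level fusion, consistent with the abstract's framing, is that the skin signal needs no dimensionality reduction: each pad's raw reading is already small enough to become a token via a single linear projection.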
Submission Number: 8