Keywords: Tactile Sensing, Learning from Humans, Data Collection, Imitation Learning, Reinforcement Learning
TL;DR: MimicTouch is a multi-modal imitation learning framework that efficiently collects human tactile demonstrations, learns human-like tactile-guided control strategies from them, and zero-shot generalizes to different task settings.
Abstract: Tactile sensing is critical to fine-grained, contact-rich manipulation tasks, such as insertion and assembly. Prior research has shown the possibility of learning tactile-guided policies from teleoperated demonstration data. However, to provide the demonstrations, human users often rely on visual feedback to control the robot. This creates a gap between the sensing modality used for controlling the robot (visual) and the modality of interest (tactile). To bridge this gap, we introduce "MimicTouch", a novel framework for learning policies directly from demonstrations provided by human users with their hands. The key innovations are i) a human tactile data collection system that collects a multi-modal tactile dataset for learning human tactile-guided control strategies, ii) an imitation learning-based framework for learning such strategies from the collected data, and iii) an online residual RL framework to bridge the embodiment gap between the human hand and the robot gripper. Through comprehensive experiments, we highlight the efficacy of utilizing human tactile-guided control strategies to solve contact-rich manipulation tasks. The project website is at https://sites.google.com/view/MimicTouch.
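To make the abstract's third component concrete, the sketch below shows the common residual-RL pattern of adding a small online-learned correction to an imitation-learned base action. It is a minimal, generic illustration, not the paper's implementation; all names (`act`, `base_policy`, `residual_policy`, `residual_scale`) and the action/observation shapes are hypothetical.

```python
import numpy as np

# Generic residual-RL action composition: an imitation-learned base policy
# proposes an action from the (visual + tactile) observation, and a residual
# policy trained online on the robot adds a bounded correction to account
# for the embodiment gap between the human hand and the robot gripper.
def act(obs, base_policy, residual_policy, residual_scale=0.1):
    a_base = base_policy(obs)              # tactile-guided base action
    a_res = residual_policy(obs, a_base)   # learned correction, roughly in [-1, 1]
    return a_base + residual_scale * a_res # small residual preserves base behavior


if __name__ == "__main__":
    # Dummy stand-ins so the sketch runs end to end; real policies would be
    # the imitation-learned controller and an online-trained RL residual.
    rng = np.random.default_rng(0)
    base_policy = lambda obs: np.zeros(6)                        # 6-DoF end-effector delta
    residual_policy = lambda obs, a: rng.uniform(-1, 1, size=6)  # placeholder residual
    obs = np.zeros(16)                                           # placeholder observation
    print(act(obs, base_policy, residual_policy))
```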
Supplementary Material: zip
Spotlight Video: mp4
Video: https://youtu.be/RWHoZOUtQvg?si=OXFG8drB90v7rZ8L
Website: https://sites.google.com/view/MimicTouch
Publication Agreement: pdf
Student Paper: yes
Submission Number: 384