Visuo-Tactile Transformers for Manipulation

Published: 10 Sept 2022, Last Modified: 05 May 2023, CoRL 2022 Poster
Keywords: Multimodal Learning, Reinforcement Learning, Manipulation
TL;DR: VTT uses multimodal feedback together with self- and cross-modal attention to build latent heatmap representations that seamlessly integrate vision and touch.
Abstract: Learning representations in the joint domain of vision and touch can improve manipulation dexterity, robustness, and sample efficiency by exploiting mutual information and complementary cues. Here, we present Visuo-Tactile Transformers (VTTs), a novel multimodal representation learning approach suited for model-based reinforcement learning and planning. Our approach extends the Visual Transformer to handle visuo-tactile feedback. Specifically, VTT uses tactile feedback together with self- and cross-modal attention to build latent heatmap representations that focus attention on important task features in the visual domain. We demonstrate the efficacy of VTT for representation learning with a comparative evaluation against baselines on four simulated robot tasks and one real-world block-pushing task. We conduct an ablation study over the components of VTT to highlight the importance of cross-modality in representation learning for robotic manipulation.
Student First Author: yes
Supplementary Material: zip
Website: https://www.mmintlab.com/vtt
Code: https://github.com/yich7045/Visuo-Tactile-Transformers-for-Manipulation
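The abstract describes the core architectural idea: visual patch tokens and tactile tokens are mixed by self-attention, and a cross-modal attention step lets touch attend over the visual patches, with the resulting attention weights forming a latent spatial heatmap. The snippet below is only a minimal, illustrative PyTorch sketch of that idea, not the authors' implementation (see the repository linked above for the official code); the module names, layer sizes, patch size, and the assumed six-dimensional tactile input are all placeholder choices for illustration.

```python
# Illustrative sketch of a visuo-tactile transformer with self- and
# cross-modal attention. NOT the official VTT code; all dimensions and
# names are assumptions made for this example.
import torch
import torch.nn as nn


class VisuoTactileSketch(nn.Module):
    def __init__(self, img_size=64, patch=8, tactile_dim=6, dim=128, heads=4, layers=2):
        super().__init__()
        n_patches = (img_size // patch) ** 2
        # ViT-style patch embedding for the visual stream
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        # Project the tactile reading into a single token in the same space
        self.tactile_embed = nn.Linear(tactile_dim, dim)
        self.pos = nn.Parameter(torch.zeros(1, n_patches + 1, dim))
        # Self-attention over the joint token sequence (vision + touch)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True),
            num_layers=layers,
        )
        # Cross-modal attention: the tactile token queries the visual tokens;
        # its attention weights can be reshaped into a spatial heatmap
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.grid = img_size // patch

    def forward(self, image, tactile):
        # image: (B, 3, H, W), tactile: (B, tactile_dim)
        vis = self.patch_embed(image).flatten(2).transpose(1, 2)   # (B, N, dim)
        tac = self.tactile_embed(tactile).unsqueeze(1)             # (B, 1, dim)
        tokens = torch.cat([vis, tac], dim=1) + self.pos
        tokens = self.encoder(tokens)                              # joint self-attention
        vis_tok, tac_tok = tokens[:, :-1], tokens[:, -1:]
        latent, attn = self.cross_attn(tac_tok, vis_tok, vis_tok)  # touch attends to vision
        heatmap = attn.mean(1).view(-1, self.grid, self.grid)      # latent spatial heatmap
        return latent.squeeze(1), heatmap


if __name__ == "__main__":
    model = VisuoTactileSketch()
    z, hm = model(torch.randn(2, 3, 64, 64), torch.randn(2, 6))
    print(z.shape, hm.shape)  # torch.Size([2, 128]) torch.Size([2, 8, 8])
```

In this sketch the pooled cross-attention output would serve as the latent state for a downstream model-based RL agent or planner, while the heatmap exposes which image regions the tactile signal attends to.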