Synthesizing and Simulating Volumetric Meshes from Vision-based Tactile Imprints

Published: 12 May 2022, Last Modified: 17 May 2023
ICRA 2022 Workshop: RL for Manipulation Poster
Readers: Everyone
Keywords: Tactile Sensor, Volumetric Mesh, GNN
TL;DR: This paper discusses the methods, results, and limitations of synthesizing volumetric meshes for vision-based tactile sensors.
Abstract: Vision-based tactile sensors typically employ a deformable elastomer and a camera to provide high-resolution contact images. This work focuses on learning to simulate and synthesize the volumetric mesh of the elastomer based on the image imprints acquired from tactile sensors. Obtaining accurate volumetric meshes for the elastomer can provide direct contact information and benefit robotic grasping and manipulation. We propose a train-then-adapt approach that leverages synthetic image-mesh pairs generated with finite element methods (FEM) and real-world images from physical sensors. Our approach can accurately reconstruct the deformation of the real-world tactile sensor elastomer in various domains. While the proposed learning approaches have been shown to produce viable solutions, we discuss their limitations and the remaining challenges for real-world applications.
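As a rough illustration of the pipeline described in the abstract (and not the authors' released code), the sketch below shows one way an image-to-mesh model of this kind could be structured: a CNN encodes the tactile imprint into a global feature, and a simple message-passing GNN over the elastomer's volumetric mesh predicts per-node displacements. All module names, feature sizes, and the mesh/edge format are illustrative assumptions; training on FEM-generated image-mesh pairs followed by adaptation to real sensor images is described only at a high level in the abstract.

```python
# Hypothetical sketch of an imprint-to-mesh-deformation model (assumed architecture,
# not the paper's implementation). Requires only PyTorch.
import torch
import torch.nn as nn


class ImageEncoder(nn.Module):
    """Encode a tactile imprint image (1, 3, H, W) into a global feature vector."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, feat_dim)

    def forward(self, img):
        return self.fc(self.conv(img).flatten(1))  # (1, feat_dim)


class MeshGNN(nn.Module):
    """Message passing over the volumetric mesh; outputs 3D displacements per node."""
    def __init__(self, feat_dim=128, hidden=128, layers=3):
        super().__init__()
        # Each node starts from its rest position concatenated with the image feature.
        self.embed = nn.Linear(3 + feat_dim, hidden)
        self.msg = nn.ModuleList([nn.Linear(2 * hidden, hidden) for _ in range(layers)])
        self.out = nn.Linear(hidden, 3)

    def forward(self, rest_pos, edges, img_feat):
        # rest_pos: (N, 3) node rest positions; edges: (E, 2) directed mesh edges (long).
        h = torch.relu(self.embed(torch.cat(
            [rest_pos, img_feat.expand(rest_pos.shape[0], -1)], dim=-1)))
        src, dst = edges[:, 0], edges[:, 1]
        for lin in self.msg:
            # Sum features of neighboring nodes, then apply a residual update.
            agg = torch.zeros_like(h).index_add_(0, dst, h[src])
            h = torch.relu(lin(torch.cat([h, agg], dim=-1))) + h
        return self.out(h)  # (N, 3) predicted node displacements


# Usage with a single imprint: deformed mesh = rest positions + predicted displacements.
img = torch.randn(1, 3, 240, 320)          # placeholder tactile imprint
rest_pos = torch.randn(500, 3)             # placeholder volumetric mesh nodes
edges = torch.randint(0, 500, (2000, 2))   # placeholder mesh connectivity
displacement = MeshGNN()(rest_pos, edges, ImageEncoder()(img))
deformed = rest_pos + displacement
```

In a train-then-adapt setup of this kind, such a model would first be supervised on FEM-generated image-mesh pairs (e.g., with an L2 loss on node displacements) and then adapted to real sensor images, for which ground-truth meshes are unavailable; the specific adaptation objective is not detailed in the abstract.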