Abstract: Humans rely on their visual and tactile senses to develop a comprehensive 3D understanding of their physical
environment. Recently, there has been a growing interest
in exploring and manipulating objects using data-driven
approaches that utilise high-resolution vision-based tactile
sensors. However, 3D shape reconstruction using tactile
sensing has lagged behind visual shape reconstruction because of limitations in existing techniques, including the
inability to generalise over unseen shapes, the absence of
real-world testing, and limited expressive capacity imposed
by discrete representations. To address these challenges,
we propose TouchSDF, a Deep Learning approach for tactile 3D shape reconstruction that leverages the rich information provided by a vision-based tactile sensor and the
expressivity of the implicit neural representation DeepSDF.
Our technique consists of two components: (1) a Convolutional Neural Network that maps tactile images into local
meshes representing the surface at the touch location, and
(2) an implicit neural function that predicts a signed distance function to extract the desired 3D shape. This combination allows TouchSDF to reconstruct smooth and continuous 3D shapes from tactile inputs in simulation and real-world settings, opening up research avenues for robust 3D-aware representations and improved multimodal perception
in robotics. Code and supplementary material are available at: https://touchsdf.github.io/
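To make the two-stage design concrete, the sketch below illustrates one plausible way to wire such a pipeline in PyTorch. It is not the authors' released code: the abstract's first stage predicts a local surface mesh at the touch location, whereas this simplified sketch condenses that stage into a convolutional encoder producing a latent code, which a DeepSDF-style MLP then combines with 3D query points to predict signed distances. All module names, layer sizes, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch of a tactile-image -> signed-distance pipeline (assumed architecture).
import torch
import torch.nn as nn


class TactileEncoder(nn.Module):
    """Maps a single-channel tactile image to a latent code (assumed 256-dim)."""

    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(128, latent_dim)

    def forward(self, tactile_image: torch.Tensor) -> torch.Tensor:
        return self.fc(self.conv(tactile_image).flatten(1))


class SDFDecoder(nn.Module):
    """DeepSDF-style MLP: (latent code, 3D query point) -> signed distance."""

    def __init__(self, latent_dim: int = 256, hidden: int = 512):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, latent: torch.Tensor, points: torch.Tensor) -> torch.Tensor:
        # points: (B, N, 3); broadcast the latent code to every query point.
        latent = latent.unsqueeze(1).expand(-1, points.shape[1], -1)
        return self.mlp(torch.cat([latent, points], dim=-1)).squeeze(-1)


# Example query: the predicted signed distances over a dense grid could be passed
# to marching cubes to extract the continuous reconstructed surface as a mesh.
encoder, decoder = TactileEncoder(), SDFDecoder()
sdf = decoder(encoder(torch.rand(1, 1, 128, 128)), torch.rand(1, 4096, 3))
print(sdf.shape)  # torch.Size([1, 4096])
```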