Keywords: dexterous grasping, generative modeling, robotic manipulation, robot learning
TL;DR: We translate grasps between different robot hands using vision and Schrödinger Bridges, enabling generalizable, stable grasp synthesis without paired data or hand-specific retraining.
Abstract: We propose a new approach to vision-based dexterous grasp translation, which aims to transfer grasp intent across robotic hands with differing morphologies. Given a visual observation of a source hand grasping an object, our goal is to synthesize a functionally equivalent grasp for a target hand without requiring paired demonstrations or hand-specific simulations. We frame this problem as stochastic transport between grasp distributions using the Schrödinger Bridge formalism. Our method learns to map between source and target latent grasp spaces via score and flow matching, conditioned on visual observations. To guide this translation, we introduce physics-informed cost functions that encode alignment in base pose, contact maps, wrench space, and manipulability. Experiments across diverse hand-object pairs demonstrate that our approach generates stable, physically grounded grasps with strong generalization. This work enables semantic grasp transfer across heterogeneous manipulators and bridges vision-based grasping with probabilistic generative modeling. Additional details are available at https://grasp2grasp.github.io/.
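To make the training signal concrete, below is a minimal PyTorch sketch of the kind of inner bridge-matching update a score/flow-matching Schrödinger Bridge method could use: a drift network is regressed onto the conditional drift of a Brownian bridge between source-hand and target-hand grasp latents, conditioned on a visual embedding. All names here (DriftNet, bridge_matching_step, sigma, z_dim, ctx_dim) are illustrative assumptions, not the paper's actual code; the full method would additionally iterate this matching step within the Schrödinger Bridge procedure and steer sampling with the physics-informed costs (base pose, contact maps, wrench space, manipulability).

```python
# Minimal sketch of one conditional bridge-matching update, assuming a learned
# latent grasp space and a pretrained visual encoder. Module and variable
# names are hypothetical, chosen only for illustration.
import torch
import torch.nn as nn

class DriftNet(nn.Module):
    """Predicts the bridge drift from (z_t, t, visual context)."""
    def __init__(self, z_dim: int, ctx_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim + ctx_dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, z_dim),
        )

    def forward(self, z_t, t, ctx):
        return self.net(torch.cat([z_t, t, ctx], dim=-1))

def bridge_matching_step(model, opt, z_src, z_tgt, ctx, sigma=0.5):
    """One regression update on a Brownian-bridge interpolant between
    source-hand latents z_src and target-hand latents z_tgt."""
    B = z_src.shape[0]
    t = torch.rand(B, 1)                          # t ~ Uniform(0, 1)
    eps = torch.randn_like(z_src)
    # Brownian bridge sample: linear mean interpolation + time-scaled noise.
    z_t = (1 - t) * z_src + t * z_tgt + sigma * torch.sqrt(t * (1 - t)) * eps
    # Conditional drift of the bridge pinned at z_tgt; clamp avoids the
    # blow-up as t -> 1 (in practice one would also reweight over t).
    target_drift = (z_tgt - z_t) / (1 - t).clamp(min=1e-3)
    loss = ((model(z_t, t, ctx) - target_drift) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage with random tensors standing in for encoded grasps and vision.
model = DriftNet(z_dim=64, ctx_dim=128)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
z_src, z_tgt = torch.randn(32, 64), torch.randn(32, 64)
ctx = torch.randn(32, 128)
print(bridge_matching_step(model, opt, z_src, z_tgt, ctx))
```

Note that with independently sampled (z_src, z_tgt) minibatches this single step corresponds to plain bridge matching; recovering the Schrödinger Bridge between the two grasp distributions requires iterating such updates (e.g., iterative proportional-fitting-style refinement), which the snippet deliberately omits.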
Primary Area: Applications (e.g., vision, language, speech and audio, Creative AI)
Submission Number: 25683