Geometry Matching for Multi-Embodiment Grasping

Published: 30 Aug 2023, Last Modified: 06 Oct 2023
CoRL 2023 Poster
Keywords: Multi-Embodiment, Dexterous Grasping, Graph Neural Networks
Abstract: While significant progress has been made on the problem of generating grasps, many existing learning-based approaches still concentrate on a single embodiment, generalize poorly to higher-DoF end-effectors, and cannot capture a diverse set of grasp modes. In this paper, we tackle multi-embodiment grasping by learning rich geometric representations for both objects and end-effectors using Graph Neural Networks (GNNs). Our novel method, GeoMatch, applies supervised learning to grasping data from multiple embodiments, learning end-to-end contact-point likelihood maps as well as conditional autoregressive prediction of grasps, keypoint by keypoint. We compare our method against 3 baselines that provide multi-embodiment support. Our approach performs better across 3 end-effectors, while also providing competitive diversity of grasps. Examples can be found at geomatch.github.io.
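To make the abstract's two core ideas concrete, below is a minimal, hypothetical sketch (not the authors' implementation): a single round of message passing produces per-vertex geometric embeddings, and an autoregressive head then selects contact keypoints one at a time, each conditioned on the keypoints already placed. All class names, dimensions, and the plain-MLP message passing are assumptions standing in for the paper's actual GNN and training details.

```python
# Hedged sketch of GNN-style geometric embeddings + autoregressive,
# keypoint-by-keypoint contact prediction. Illustrative only.
import torch
import torch.nn as nn


class SimpleGraphEncoder(nn.Module):
    """One round of sum-aggregation message passing over a point graph."""

    def __init__(self, in_dim: int = 3, hidden: int = 64):
        super().__init__()
        self.node_mlp = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.msg_mlp = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU())

    def forward(self, xyz: torch.Tensor, edges: torch.Tensor) -> torch.Tensor:
        # xyz: (N, 3) vertex positions; edges: (E, 2) directed index pairs.
        h = self.node_mlp(xyz)
        src, dst = edges[:, 0], edges[:, 1]
        msgs = self.msg_mlp(torch.cat([h[src], h[dst]], dim=-1))
        agg = torch.zeros_like(h).index_add_(0, dst, msgs)
        return h + agg  # (N, hidden) per-vertex geometric embeddings


class AutoregressiveKeypointHead(nn.Module):
    """Samples K contact keypoints sequentially, conditioned on earlier picks."""

    def __init__(self, hidden: int = 64, num_keypoints: int = 5):
        super().__init__()
        self.num_keypoints = num_keypoints
        # Scores every object vertex given its embedding + keypoints chosen so far.
        self.score = nn.Linear(hidden + 3 * num_keypoints, 1)

    def forward(self, obj_emb: torch.Tensor, obj_xyz: torch.Tensor) -> torch.Tensor:
        # obj_emb: (N, hidden); obj_xyz: (N, 3). Returns (K, 3) contact points.
        chosen = torch.zeros(self.num_keypoints, 3)
        for k in range(self.num_keypoints):
            context = chosen.flatten().expand(obj_emb.size(0), -1)
            logits = self.score(torch.cat([obj_emb, context], dim=-1)).squeeze(-1)
            idx = torch.distributions.Categorical(logits=logits).sample()
            chosen[k] = obj_xyz[idx]
        return chosen


if __name__ == "__main__":
    xyz = torch.randn(200, 3)                # toy object point cloud
    edges = torch.randint(0, 200, (800, 2))  # toy graph connectivity
    enc, head = SimpleGraphEncoder(), AutoregressiveKeypointHead()
    print(head(enc(xyz, edges), xyz).shape)  # torch.Size([5, 3])
```

In the paper's setting, such an encoder would presumably be applied to both the object and each end-effector so that contact likelihoods are conditioned on both geometries; the sketch above shows only the object side to keep the example short.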
Student First Author: yes
Website: https://geo-match.github.io/