DexGraspNet 2.0: Learning Generative Dexterous Grasping in Large-scale Synthetic Cluttered Scenes

Published: 02 Jul 2024, Last Modified: 15 Jul 2024, DM 2024, CC BY 4.0
Track: Paper Submission Track
Keywords: Dexterous Grasping, Synthetic Data, Generative Models
Abstract: Grasping in cluttered scenes remains highly challenging for dexterous hands due to the scarcity of training data. To address this problem, we present a large-scale synthetic dataset encompassing 1319 objects, 8270 scenes, and 426 million grasps. Beyond benchmarking, we also explore data-efficient strategies for learning from grasping data. We find that the key to effective generalization is combining a generative model conditioned on local features with a grasp dataset that emphasizes complex scene variations. Our proposed generative method outperforms all baselines in simulation experiments. Furthermore, it demonstrates zero-shot sim-to-real transfer through test-time depth restoration, attaining a 90.70% real-world dexterous grasping success rate and showcasing the strong potential of fully synthetic training data.
Supplementary Material: zip
Submission Number: 197