Abstract: In this work, we generate both the hand and the object in a grasp with a single diffusion model. Our proposed Joint Hand-Object Diffusion (JHOD) models the hand and the object in a unified latent representation. It leverages large-scale object datasets to learn an inclusive object latent embedding, and uses hand-object grasping data to learn to align the hand and object embeddings into plausible grasps. Because the object is an optional condition, the diffusion model can generate grasps either unconditionally or conditioned on a given object. Compared to the usual practice of learning object-conditioned grasp generation from hand-object grasp data alone, our method benefits from the more diverse object data seen during training and thus handles grasp generation more universally. In both qualitative and quantitative experiments, conditional and unconditional grasp generation achieve good visual plausibility and diversity, and the proposed method generalizes well to unseen object shapes. The code and weights will be made public upon acceptance.
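To make the optional-conditioning idea concrete, here is a minimal PyTorch sketch of a denoiser over a joint hand-object latent that runs in both conditional and unconditional modes. All names, dimensions, the learned null-condition token, and the DDPM-style sampling loop are assumptions for illustration, not the paper's actual architecture or training recipe.

```python
import torch
import torch.nn as nn

class JointDenoiser(nn.Module):
    """Toy denoiser over a concatenated hand+object latent.
    Layer sizes and the MLP backbone are made up for this sketch."""
    def __init__(self, latent_dim=128, cond_dim=128, hidden=256):
        super().__init__()
        # Learned "no object" token: when no condition is given, the model
        # falls back to this embedding, so one network covers both modes.
        self.null_cond = nn.Parameter(torch.zeros(cond_dim))
        self.net = nn.Sequential(
            nn.Linear(latent_dim + cond_dim + 1, hidden),
            nn.SiLU(),
            nn.Linear(hidden, hidden),
            nn.SiLU(),
            nn.Linear(hidden, latent_dim),
        )

    def forward(self, z_t, t, obj_cond=None):
        if obj_cond is None:  # unconditional mode
            obj_cond = self.null_cond.expand(z_t.shape[0], -1)
        t = t.float().unsqueeze(-1)  # (B, 1) timestep channel
        return self.net(torch.cat([z_t, obj_cond, t], dim=-1))

@torch.no_grad()
def sample(model, steps=50, latent_dim=128, obj_cond=None, batch=4):
    """DDPM-style ancestral sampling over the joint latent."""
    betas = torch.linspace(1e-4, 0.02, steps)
    alphas_bar = torch.cumprod(1.0 - betas, dim=0)
    z = torch.randn(batch, latent_dim)
    for i in reversed(range(steps)):
        t = torch.full((batch,), i)
        eps = model(z, t, obj_cond)  # predicted noise
        # Posterior mean: (z - beta/sqrt(1-alpha_bar) * eps) / sqrt(alpha)
        z = (z - betas[i] / (1.0 - alphas_bar[i]).sqrt() * eps) / (1.0 - betas[i]).sqrt()
        if i > 0:
            z = z + betas[i].sqrt() * torch.randn_like(z)
    return z  # would be decoded into hand and object shapes downstream

# Unconditional: grasps = sample(JointDenoiser())
# Conditional:   grasps = sample(JointDenoiser(), obj_cond=object_encoder_output)
```

Passing `obj_cond=None` versus an encoded object latent is what lets the same trained weights serve both generation modes described in the abstract.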
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Shuangfei_Zhai3
Submission Number: 4491