Joint Diffusion for Universal Hand-Object Grasp Generation

Published: 01 Dec 2025, Last Modified: 01 Dec 2025. Accepted by TMLR. License: CC BY 4.0
Abstract: Predicting and generating human hand grasps on objects is critical for animation and robotic tasks. In this work, we focus on generating both the hand and the object in a grasp with a single diffusion model. Our proposed Joint Hand-Object Diffusion (JHOD) models the hand and object in a unified latent representation. It uses hand-object grasping data to learn to accommodate the hand to the object and form plausible grasps. In addition, to improve generalization across diverse object shapes, it leverages large-scale object datasets to learn an inclusive object latent embedding. With or without a given object as an optional condition, the diffusion model can generate grasps either unconditionally or conditioned on the object. Compared to the usual practice of learning object-conditioned grasp generation from hand-object grasp data alone, our method benefits from the more diverse object data used in training and handles grasp generation more universally. In both qualitative and quantitative experiments, conditional and unconditional grasp generation achieve good visual plausibility and diversity. With the additional inclusiveness of the object representation learned from large-scale object datasets, the proposed method generalizes well to unseen object shapes.
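The abstract's "optional condition" setup can be illustrated with a minimal sketch. The snippet below is not the authors' implementation; it only shows one common way such optional conditioning is wired up: the object latent is randomly replaced by a null embedding during training, so a single denoiser supports both conditional and unconditional generation. All names, dimensions, and the dropout probability here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

HAND_DIM, OBJ_DIM = 8, 16  # hypothetical latent sizes, not from the paper

# A null embedding stands in for the missing object condition
# (a real model would typically learn this vector).
NULL_OBJ = np.zeros(OBJ_DIM)

def denoise_input(z_hand_noisy, obj_latent, p_drop=0.1, train=True):
    """Build the joint latent fed to one (mock) denoising step.

    When no object latent is given -- or when it is randomly dropped
    during training -- the null embedding is substituted, so the same
    model covers both conditional and unconditional grasp generation.
    """
    if obj_latent is None or (train and rng.random() < p_drop):
        obj_latent = NULL_OBJ
    # A real denoiser would predict noise from this joint latent with a
    # learned network; here we just return the input to show the data flow.
    return np.concatenate([z_hand_noisy, obj_latent])

# Conditional: an object latent is provided.
out_cond = denoise_input(np.ones(HAND_DIM), rng.standard_normal(OBJ_DIM), train=False)
# Unconditional: no object latent, so the null embedding is used.
out_uncond = denoise_input(np.ones(HAND_DIM), None, train=False)
```

At sampling time, passing `obj_latent=None` yields unconditional generation, while supplying an object latent conditions the grasp on that object, matching the dual-mode usage the abstract describes.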
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: We addressed the reviewers' concerns and added the corresponding clarifications to the draft. We added figures on the hand contact area and more qualitative comparisons with other methods in the appendix. We corrected typos and improved the writing.
Assigned Action Editor: ~Shuangfei_Zhai3
Submission Number: 4491