HOH: Markerless Multimodal Human-Object-Human Handover Dataset with Large Object Count

NeurIPS 2023 Datasets and Benchmarks Track, Submission 789 Authors

Published: 26 Sept 2023, Last Modified: 02 Feb 2024
NeurIPS 2023 Datasets and Benchmarks Poster
Keywords: human-human, handover, dataset, handoff, human-robot interaction, hand-object interactions, multimodal human data collection
TL;DR: Dataset of human-human handovers of 136 objects by 20 subject pairs, with multiview RGBD images, 3D point clouds, hand/object 2D/3D segmentations, and comfort ratings, supporting handover parameter studies, hand-object pose estimation, and human-robot handover.
Abstract: We present the HOH (Human-Object-Human) Handover Dataset, a large-object-count dataset with 136 objects, to accelerate data-driven research on handover studies, human-robot handover implementation, and artificial intelligence (AI) methods for handover parameter estimation from 2D and 3D data of two-person interactions. HOH contains multi-view RGB and depth data, skeletons, fused point clouds, grasp type and handedness labels, 2D and 3D segmentations of the object, giver hand, and receiver hand, giver and receiver comfort ratings, and paired object metadata with aligned 3D models, for 2,720 handover interactions spanning 136 objects and 20 participant pairs (40 giver-receiver configurations when role reversal is counted) formed from 40 participants. We also present experimental results of neural networks trained on HOH to perform grasp, orientation, and trajectory prediction. As the only fully markerless handover capture dataset, HOH represents natural human-human handover interactions, overcoming challenges of marker-based datasets, which require specialized suits for body tracking and lack high-resolution hand tracking. To date, HOH is the largest handover dataset in terms of object count, participant count, pairs with role reversal accounted for, and total interactions captured.
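To make the per-interaction annotations listed above concrete, below is a minimal, hypothetical sketch in Python of how a single HOH interaction record could be organized. All field names, types, and shapes here are illustrative assumptions, not the dataset's actual schema or loading API.

```python
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class HandoverRecord:
    """Hypothetical container for one HOH handover interaction.

    Field names are assumptions for illustration only; consult the
    dataset's documentation for the real file layout and schema.
    """
    pair_id: int                        # one of 20 participant pairs
    role_reversed: bool                 # giver/receiver roles swapped (40 configs total)
    object_id: int                      # one of the 136 objects
    rgb_frames: List[np.ndarray]        # multi-view RGB images per timestep
    depth_frames: List[np.ndarray]      # multi-view depth images per timestep
    skeletons: np.ndarray               # body joint positions over time
    fused_point_cloud: np.ndarray       # fused 3D point cloud, e.g. shape (N, 3)
    grasp_type: str                     # grasp type label
    handedness: str                     # "left" or "right"
    object_mask_2d: List[np.ndarray]    # 2D segmentation masks per view
    giver_hand_mask_2d: List[np.ndarray]
    receiver_hand_mask_2d: List[np.ndarray]
    giver_comfort: int                  # giver comfort rating
    receiver_comfort: int               # receiver comfort rating
    object_model_path: str              # aligned 3D model of the object
```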
Supplementary Material: pdf
Submission Number: 789