Dance2Hesitate: A Multi-Modal Dataset of Dancer-Taught Hesitancy for Understandable Robot Motion

Published: 26 Feb 2026 · Last Modified: 12 Mar 2026 · D-TUR 2026 · CC BY 4.0
Keywords: Human-Robot Interaction, Expressive Motion, Hesitancy
TL;DR: We release Dance2Hesitate, a multi-modal, dancer-generated dataset that captures graded hesitancy in both a Franka Emika Panda reaching task and synchronized human RGB-D motion capture, enabling research on hesitancy as understandable robot motion.
Abstract: In human-robot collaboration, a robot's expression of hesitancy is a critical factor that shapes human coordination strategies, attention allocation, and safety-related judgments. However, designing hesitant robot motion that generalizes is challenging because the observer's inference is highly dependent on embodiment and context. To address these challenges, we introduce and open-source a multi-modal, dancer-generated dataset of hesitant motion focused on specific context-embodiment pairs (i.e., a manipulator or human upper limb approaching a Jenga tower, and anthropomorphic whole-body motion in free space). The dataset includes (i) kinesthetic teaching demonstrations on a Franka Emika Panda reaching from a fixed start configuration to a fixed target (a Jenga tower) at three graded hesitancy levels (slight, significant, extreme) and (ii) synchronized RGB-D motion capture of dancers performing the same upper-limb reaching behavior at the same three hesitancy levels, plus full-body sequences for extreme hesitancy. We further provide documentation to enable reproducible benchmarking across robot and human modalities. Across all dancers, we obtained 70 unique whole-body trajectories, 84 upper-limb trajectories spanning the three hesitancy levels, and 66 kinesthetic teaching trajectories spanning the three hesitancy levels. The dataset can be accessed here: https://brsrikrishna.github.io/Dance2Hesitate/.
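
Because the release organizes trajectories by modality and hesitancy level, a minimal loading sketch may help illustrate how such a dataset could be traversed. The directory names, .npz file format, and array keys below are assumptions for illustration only, not the dataset's actual schema; consult the documentation on the project page for the real layout.

    # Minimal sketch for browsing a local copy of Dance2Hesitate.
    # ASSUMPTIONS: directory layout, .npz format, and array keys are
    # hypothetical; see https://brsrikrishna.github.io/Dance2Hesitate/
    # for the actual schema.
    from pathlib import Path
    import numpy as np

    MODALITIES = ["robot_kinesthetic", "human_upper_limb", "human_whole_body"]  # assumed names
    LEVELS = ["slight", "significant", "extreme"]  # hesitancy levels from the abstract

    def iter_trajectories(root: str):
        """Yield (modality, level, timestamps, positions) for every .npz file found."""
        for modality in MODALITIES:
            for level in LEVELS:
                for f in sorted(Path(root, modality, level).glob("*.npz")):
                    data = np.load(f)
                    # Assumed keys: 't' (timestamps, shape [T]) and
                    # 'q' (joint or keypoint positions, shape [T, D]).
                    yield modality, level, data["t"], data["q"]

    if __name__ == "__main__":
        for modality, level, t, q in iter_trajectories("dance2hesitate"):
            print(f"{modality}/{level}: {len(t)} samples, dim {q.shape[1]}")

Note that, per the abstract, whole-body sequences exist only for the extreme hesitancy level, so a traversal like this would simply find no files for the other whole-body levels.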
Submission Number: 9