Abstract: Few existing methods produce full-body user motion in virtual environments from only the tracking data of a consumer-level head-mounted display. This preliminary project generates full-body motions from the user's hand and head positions through data-based motion accentuation. The method is evaluated in a simple collaborative scenario with one Pointer, represented by an avatar, pointing at targets while an Observer interprets the Pointer's movements. The Pointer's motion is modified by our motion accentuation algorithm, SocialMoves. The Pointer's motion is compared across SocialMoves, a system built around Final IK, and a ground truth capture. Our method achieved the same level of user experience as the ground truth method.