4D Model-Based Spatiotemporal Alignment of Scripted Taiji Quan Sequences

ICCV Workshops 2017 (modified: 10 Nov 2022)
Abstract: We develop a computational tool that aligns motion capture (mocap) data to videos of 24-form simplified Taiji (TaiChi) Quan, a scripted motion sequence about 5 minutes long. With only prior knowledge that the subjects in video and mocap perform a similar pose sequence, we establish inter-subject temporal synchronization and spatial alignment of mocap and video based on body joint correspondences. Through time alignment and matching the viewpoint and orientation of the video camera, the 3D body joints from mocap data of subject A can be correctly projected onto the video performance of subject B. Initial quantitative evaluation of this alignment method shows promise in offering the first validated algorithmic treatment for cross-subject comparison of Taiji Quan performances. This work opens the door to subject-specific quantified comparison of long motion sequences beyond Taiji.
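The final step described in the abstract, projecting the 3D body joints of subject A's mocap data onto the video frame of subject B once camera viewpoint and orientation are matched, amounts to a standard pinhole-camera projection. A minimal sketch of that projection, assuming a rigid camera pose `(R, t)` and an intrinsics matrix `K` produced by the alignment (all names and numeric values below are illustrative, not taken from the paper):

```python
import numpy as np

def project_joints(joints_3d, R, t, K):
    """Project 3D mocap joints (N, 3) into the image plane of an
    assumed pinhole camera with rotation R (3x3), translation t (3,),
    and intrinsics K (3x3). Returns (N, 2) pixel coordinates."""
    cam = joints_3d @ R.T + t        # world -> camera coordinates
    uvw = cam @ K.T                  # apply camera intrinsics
    return uvw[:, :2] / uvw[:, 2:3]  # perspective divide

# Toy example: camera looking down +z, joints pushed 3 m in front of it.
K = np.array([[1000.0,    0.0, 320.0],
              [   0.0, 1000.0, 240.0],
              [   0.0,    0.0,   1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 3.0])
joints = np.array([[0.0, 0.0, 0.0],   # e.g. a pelvis joint at the origin
                   [0.0, 0.5, 0.0]])  # e.g. a joint 0.5 m above it
px = project_joints(joints, R, t, K)  # pelvis lands on the principal point
```

In the paper's setting, `R`, `t`, and `K` would come from matching the video camera's viewpoint, while the temporal synchronization step decides which mocap frame's `joints_3d` to project onto each video frame.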