A Deep Learning Approach for Motion Retargeting
Hanyoung Jang, Byungjun Kwon, Moonwon Yu, Jongmin Kim
Feb 12, 2018 (modified: Feb 12, 2018) · ICLR 2018 Workshop Submission · readers: everyone
Abstract: Motion retargeting is the process of transferring motion from one (source) character to another (target) character whose body sizes and proportions (e.g., arms, legs, torso) differ. One of the simplest ways to retarget human motion is to manually modify its joint angles one at a time; however, doing so for all joints of the given motion sequences is a difficult and tedious task. The problem of automatic motion retargeting has therefore been studied for several decades, yet the quality of the resulting motion is on occasion unrealistic, since numerical optimization alone can hardly capture the details and nuance of human movement. To address this issue, we present a novel human motion retargeting system that uses a deep learning framework with motion capture data to produce high-quality human motion. We make use of a deep autoencoder composed of convolutional layers and a fully connected layer. Our results show that the method adjusts a character to meet kinematic constraints such as bone lengths and foot placements without noticeable artifacts. Furthermore, the proposed method requires only the source motion and bone length ratio as input, which is more intuitive for users than previous methods. We believe that our method is practical enough to be used in game and VFX production.
Keywords: Motion Retargeting, Character Animation, Convolutional Autoencoder
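To make the architecture in the abstract concrete, below is a minimal forward-pass sketch of a convolutional autoencoder that consumes a joint-angle sequence together with a bone length ratio and decodes a retargeted sequence. All layer sizes, names, and the exact way the bone-length ratio is injected are illustrative assumptions, not the authors' actual network; the weights are random stand-ins for trained parameters.

```python
import numpy as np

# Hypothetical sketch of the paper's idea: a conv encoder over time,
# a fully connected bottleneck conditioned on a bone-length ratio,
# and a decoder back to the motion representation. Sizes are assumed.
rng = np.random.default_rng(0)

T, J = 16, 8       # frames and joint-angle channels (assumed sizes)
C, K = 4, 3        # number of temporal conv filters and kernel width (assumed)
H = 6              # width of the fully connected bottleneck (assumed)

def relu(x):
    return np.maximum(x, 0.0)

def conv1d_valid(x, w):
    """'Valid' 1D convolution over time: x is (T, J), w is (C, K, J)."""
    Tc = x.shape[0] - w.shape[1] + 1
    out = np.empty((Tc, w.shape[0]))
    for c in range(w.shape[0]):
        for t in range(Tc):
            out[t, c] = np.sum(x[t:t + w.shape[1]] * w[c])
    return out

Tc = T - K + 1
W_conv = rng.normal(scale=0.1, size=(C, K, J))
W_enc = rng.normal(scale=0.1, size=(Tc * C + 1, H))  # +1 slot for bone ratio
W_dec = rng.normal(scale=0.1, size=(H, T * J))

def retarget(motion, bone_ratio):
    """Encode source motion plus a bone-length ratio; decode target motion."""
    feat = relu(conv1d_valid(motion, W_conv)).ravel()
    code = relu(np.concatenate([feat, [bone_ratio]]) @ W_enc)
    return (code @ W_dec).reshape(T, J)

source = rng.normal(size=(T, J))     # dummy joint-angle sequence
target = retarget(source, bone_ratio=1.2)
print(target.shape)                  # (16, 8): same layout as the source clip
```

In a trained system, the decoder output would additionally be corrected for kinematic constraints (bone lengths, foot placement) as the abstract describes; this sketch only shows the data flow through the conv + fully connected stages.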