DELAYED SKIP CONNECTIONS FOR MUSIC CONTENT DRIVEN MOTION GENERATION

12 Feb 2018, ICLR 2018 Workshop Submission
Abstract: In this study, we incorporate skip connections into a deep recurrent neural network for modeling basic dance steps using audio as input. Our model consists of two blocks: one encodes the audio input sequence, and the other generates the motion. The encoder uses a convolutional, long short-term memory deep neural network (CLDNN) configuration to handle the power features of the audio. Furthermore, we add skip connections between the contexts of the music encoder and the motion decoder (i.e., delayed skips) for consistent motion generation. The experimental results show that the trained model generates predictive basic dance steps from a narrow dataset with low error and maintains a motion-beat F-score similar to that of the baseline dancer.
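To make the described encoder-decoder layout concrete, a minimal PyTorch sketch is given below. It is not the authors' implementation; the feature dimension, hidden size, pose dimension, and the delay length of the skip connection are all placeholder assumptions, since the abstract does not specify them.

```python
import torch
import torch.nn as nn

class CLDNNEncoder(nn.Module):
    """Sketch of a CLDNN audio encoder: convolutions over power features
    followed by LSTM layers, as loosely described in the abstract."""
    def __init__(self, n_feats=40, hidden=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_feats, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.lstm = nn.LSTM(64, hidden, num_layers=2, batch_first=True)

    def forward(self, audio):                 # audio: (batch, time, n_feats)
        x = self.conv(audio.transpose(1, 2)).transpose(1, 2)
        ctx, _ = self.lstm(x)                 # ctx: (batch, time, hidden)
        return ctx

class MotionDecoder(nn.Module):
    """Sketch of a motion decoder with a delayed skip connection:
    the encoder context from `delay` frames earlier is added to the
    decoder state before predicting the pose (delay > 0 assumed)."""
    def __init__(self, hidden=256, pose_dim=63, delay=3):
        super().__init__()
        self.delay = delay
        self.lstm = nn.LSTM(hidden, hidden, num_layers=2, batch_first=True)
        self.out = nn.Linear(hidden, pose_dim)

    def forward(self, ctx):                   # ctx: (batch, time, hidden)
        h, _ = self.lstm(ctx)
        # delayed skip: shift the encoder context right by `delay` frames
        skipped = torch.zeros_like(ctx)
        skipped[:, self.delay:] = ctx[:, :-self.delay]
        return self.out(h + skipped)          # (batch, time, pose_dim)

# Usage sketch: 2-second batch of 40-dim audio features at 100 fps
audio = torch.randn(8, 200, 40)
poses = MotionDecoder()(CLDNNEncoder()(audio))   # (8, 200, 63)
```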
TL;DR: Incorporating skip connections into a deep recurrent neural network for modeling basic dance steps using audio as input
Keywords: Deep Learning, Skip Connections, CLDRNN