SampleRNN: An Unconditional End-to-End Neural Audio Generation Model
Soroush Mehri, Kundan Kumar, Ishaan Gulrajani, Rithesh Kumar, Shubham Jain, Jose Sotelo, Aaron Courville, Yoshua Bengio
Nov 04, 2016 (modified: Feb 24, 2017) · ICLR 2017 conference submission · Readers: everyone
Abstract: In this paper we propose a novel model for the unconditional audio generation task that generates one audio sample at a time. We show that our model, which combines memory-less modules, namely autoregressive multilayer perceptrons, with stateful recurrent neural networks in a hierarchical structure, is able to capture underlying sources of variation in the temporal domain over very long time spans, on three datasets of different nature. Human evaluation of the generated samples indicates that our model is preferred over competing models. We also show how each component of the model contributes to the exhibited performance.
TL;DR: Novel model for the unconditional audio generation task using hierarchical multi-scale RNNs and an autoregressive MLP.
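The hierarchy described in the TL;DR can be sketched as follows: a slow, stateful frame-level RNN advances once per frame of samples, and a fast, memory-less MLP then emits samples one at a time, conditioned on the RNN state and the most recent samples. This is only a toy NumPy illustration of the idea, not the paper's implementation: the tier count, frame size, hidden size, quantization levels, and all weights here are made-up values (the paper uses learned weights, 256-way quantized output, and larger dimensions).

```python
import numpy as np

rng = np.random.default_rng(0)

FRAME_SIZE = 4   # samples per frame (toy value; an assumption)
HIDDEN = 8       # RNN state size (toy value)
Q = 16           # quantization levels (paper uses 256; toy value here)

# Frame-level RNN parameters (random toy weights, not learned)
W_in = rng.normal(scale=0.1, size=(FRAME_SIZE, HIDDEN))
W_h = rng.normal(scale=0.1, size=(HIDDEN, HIDDEN))
# Sample-level MLP parameters (random toy weights, not learned)
W_mlp1 = rng.normal(scale=0.1, size=(FRAME_SIZE + HIDDEN, HIDDEN))
W_mlp2 = rng.normal(scale=0.1, size=(HIDDEN, Q))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def generate(n_frames):
    """Autoregressively generate n_frames * FRAME_SIZE quantized samples."""
    h = np.zeros(HIDDEN)        # frame-level (slow tier) RNN state
    samples = [0] * FRAME_SIZE  # seed context with silence
    for _ in range(n_frames):
        prev_frame = np.array(samples[-FRAME_SIZE:], dtype=float) / Q
        # Slow tier: one RNN step per frame of FRAME_SIZE samples
        h = np.tanh(prev_frame @ W_in + h @ W_h)
        # Fast tier: memory-less MLP emits one sample at a time,
        # conditioned on h and the last FRAME_SIZE samples
        for _ in range(FRAME_SIZE):
            ctx = np.concatenate(
                [np.array(samples[-FRAME_SIZE:], dtype=float) / Q, h])
            probs = softmax(np.tanh(ctx @ W_mlp1) @ W_mlp2)
            samples.append(int(rng.choice(Q, p=probs)))
    return samples[FRAME_SIZE:]

out = generate(8)
print(len(out))  # 8 frames * 4 samples = 32 generated samples
```

The key design point the sketch mirrors is that the two tiers operate at different clock rates: the RNN carries long-range memory cheaply (one update per frame), while the per-sample MLP stays memory-less and fast.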
Keywords:Speech, Deep learning, Unsupervised Learning, Applications