SampleRNN: An Unconditional End-to-End Neural Audio Generation Model

Published: 21 Jul 2022, Last Modified: 22 Oct 2023, ICLR 2017 Poster
Abstract: In this paper we propose a novel model for the unconditional audio generation task that generates one audio sample at a time. We show that our model, which profits from combining memory-less modules, namely an autoregressive multilayer perceptron, with stateful recurrent neural networks in a hierarchical structure, is able to capture underlying sources of variation in the temporal domain over very long time spans, on three datasets of different nature. Human evaluation of the generated samples indicates that our model is preferred over competing models. We also show how each component of the model contributes to the exhibited performance.
TL;DR: A novel model for unconditional audio generation using hierarchical multi-scale RNNs and an autoregressive MLP.
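
The abstract describes a hierarchy in which a stateful recurrent tier conditions a memory-less autoregressive MLP that emits one quantized audio sample at a time. The sketch below is only a minimal illustration of that idea, not the authors' implementation: the two-tier layout, the GRU cell, 8-bit (256-level) quantization, and the frame and hidden sizes are all assumptions chosen for brevity.

```python
# Minimal two-tier sketch of the hierarchy described in the abstract.
# Assumptions (not from the paper): GRU recurrent tier, 256-level quantized
# samples, frame_size=16, illustrative hidden sizes.
import torch
import torch.nn as nn

class TwoTierSampleModel(nn.Module):
    def __init__(self, frame_size=16, hidden=512, levels=256):
        super().__init__()
        self.frame_size = frame_size
        # Stateful recurrent tier: summarizes the previous frame of samples
        # and carries long-range context across frames.
        self.frame_rnn = nn.GRU(frame_size, hidden, batch_first=True)
        # Memory-less sample-level tier: an autoregressive MLP combining the
        # recurrent conditioning vector with the few preceding samples.
        self.embed = nn.Embedding(levels, 32)
        self.mlp = nn.Sequential(
            nn.Linear(hidden + frame_size * 32, hidden),
            nn.ReLU(),
            nn.Linear(hidden, levels),  # categorical logits over sample levels
        )

    def forward(self, prev_frame, prev_samples, state=None):
        # prev_frame:   (batch, 1, frame_size) real-valued previous frame
        # prev_samples: (batch, frame_size) quantized samples before the target
        cond, state = self.frame_rnn(prev_frame, state)
        emb = self.embed(prev_samples).flatten(1)
        logits = self.mlp(torch.cat([cond[:, -1], emb], dim=1))
        return logits, state  # next-sample logits plus carried RNN state
```

At generation time one would sample from the categorical logits, append the drawn sample to the conditioning window, and advance the recurrent state whenever a full frame of new samples has been produced, so that audio is generated strictly one sample at a time.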
Keywords: Speech, Deep learning, Unsupervised Learning, Applications
Conflicts: umontreal.ca, iitk.ac.in
Community Implementations: [2 code implementations](https://www.catalyzex.com/paper/arxiv:1612.07837/code)