GANSynth: Adversarial Neural Audio Synthesis

Published: 21 Dec 2018, Last Modified: 14 Oct 2024. ICLR 2019 Conference Blind Submission.
Abstract: Efficient audio synthesis is an inherently difficult machine learning task, as human perception is sensitive to both global structure and fine-scale waveform coherence. Autoregressive models, such as WaveNet, model local structure at the expense of global latent structure and suffer from slow iterative sampling, while Generative Adversarial Networks (GANs) have global latent conditioning and efficient parallel sampling, but struggle to generate locally-coherent audio waveforms. Herein, we demonstrate that GANs can in fact generate high-fidelity and locally-coherent audio by modeling log magnitudes and instantaneous frequencies with sufficient frequency resolution in the spectral domain. Through extensive empirical investigations on the NSynth dataset, we demonstrate that GANs are able to outperform strong WaveNet baselines on automated and human evaluation metrics, and efficiently generate audio several orders of magnitude faster than their autoregressive counterparts.
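As a rough illustration of the spectral representation the abstract describes, the sketch below computes log magnitudes and instantaneous frequencies (time-differences of unwrapped phase) from an STFT. This is not the authors' released GANSynth code; the function name, sample rate, window size, and hop size are illustrative placeholders.

```python
# Minimal sketch (assumed parameters, not the GANSynth implementation):
# log-magnitude and instantaneous-frequency spectrograms from an STFT.
import numpy as np
from scipy.signal import stft

def log_mag_and_inst_freq(audio, sr=16000, n_fft=1024, hop=256):
    # Complex STFT: shape (freq_bins, frames)
    _, _, spec = stft(audio, fs=sr, nperseg=n_fft, noverlap=n_fft - hop)
    # Log magnitude; small epsilon avoids log(0)
    log_mag = np.log(np.abs(spec) + 1e-6)
    # Phase unwrapped along the time axis
    phase = np.unwrap(np.angle(spec), axis=1)
    # Instantaneous frequency: finite difference of unwrapped phase over time
    inst_freq = np.diff(phase, axis=1, prepend=phase[:, :1])
    return log_mag, inst_freq
```

A generator trained on such (log-magnitude, instantaneous-frequency) pairs can be inverted back to a waveform by integrating the phase differences and taking an inverse STFT, which is what makes fast parallel sampling possible.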
Keywords: GAN, Audio, WaveNet, NSynth, Music
TL;DR: High-quality audio synthesis with GANs
Code: [6 community implementations (Papers with Code)](https://paperswithcode.com/paper/?openreview=H1xQVn09FX)
Data: [CelebA](https://paperswithcode.com/dataset/celeba), [NSynth](https://paperswithcode.com/dataset/nsynth)
Community Implementations: [4 code implementations (CatalyzeX)](https://www.catalyzex.com/paper/gansynth-adversarial-neural-audio-synthesis/code)