Chunked Autoregressive GAN for Conditional Waveform Synthesis

29 Sept 2021, 00:35 (modified: 03 Mar 2022, 22:59) · ICLR 2022 Poster
Keywords: audio generation, speech synthesis, deep learning, generative models, autoregression, generative adversarial networks
Abstract: Conditional waveform synthesis models learn a distribution of audio waveforms given conditioning such as text, mel-spectrograms, or MIDI. These systems employ deep generative models that model the waveform via either sequential (autoregressive) or parallel (non-autoregressive) sampling. Generative adversarial networks (GANs) have become a common choice for non-autoregressive waveform synthesis. However, state-of-the-art GAN-based models produce artifacts when performing mel-spectrogram inversion. In this paper, we demonstrate that these artifacts correspond to an inability of the generator to learn accurate pitch and periodicity. We show that simple pitch and periodicity conditioning is insufficient for reducing this error relative to using autoregression. We discuss the inductive bias that autoregression provides for learning the relationship between instantaneous frequency and phase, and show that this inductive bias holds even when autoregressively sampling large chunks of the waveform during each forward pass. Relative to prior state-of-the-art GAN-based models, our proposed model, Chunked Autoregressive GAN (CARGAN), reduces pitch error by 40-60%, reduces training time by 58%, maintains a fast inference speed suitable for real-time or interactive applications, and maintains or improves subjective quality.
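The chunked autoregressive sampling the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's architecture: the generator interface, chunk size, and context handling are all assumptions made for the sake of the example.

```python
import numpy as np

def chunked_autoregressive_generate(generator, conditioning, chunk_size, context_size):
    """Sketch of chunked autoregressive sampling: each forward pass emits a
    whole chunk of samples, conditioned on the most recent output samples.

    Hypothetical interface: `generator(frame, context)` returns `chunk_size`
    samples given one conditioning frame and the autoregressive context.
    """
    context = np.zeros(context_size)  # initial (silent) autoregressive context
    chunks = []
    for frame in conditioning:
        chunk = generator(frame, context)   # one forward pass -> one chunk
        chunks.append(chunk)
        # Slide the context window forward over the newly generated samples
        context = np.concatenate([context, chunk])[-context_size:]
    return np.concatenate(chunks)

# Toy stand-in generator for illustration only: offsets the conditioning
# value by the mean of the autoregressive context.
def toy_generator(frame, context):
    return np.full(4, frame + context.mean())

waveform = chunked_autoregressive_generate(toy_generator, [0.0, 1.0, 2.0], 4, 8)
print(waveform.shape)  # (12,)
```

Compared with sample-by-sample autoregression, emitting `chunk_size` samples per forward pass keeps inference fast while still letting each chunk depend on previously generated output.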
One-sentence Summary: We improve the state-of-the-art of conditional waveform synthesis by combining the strengths of GANs and autoregression
Supplementary Material: zip
