Generating Diverse High-Resolution Images with VQ-VAE

27 Mar 2019 (modified: 02 Jul 2019) · ICLR 2019 Workshop DeepGenStruct Blind Submission
  • Keywords: Vector Quantization, Autoregressive models, Generative Models
  • TL;DR: Scale and enhance VQ-VAE with powerful autoregressive priors to generate near-realistic images.
  • Abstract: We explore the use of Vector Quantized Variational AutoEncoder (VQ-VAE) models for large-scale image generation. To this end, we scale and enhance the autoregressive priors used in VQ-VAE to generate synthetic samples of much higher coherence and fidelity than previously possible. We use simple feed-forward encoder and decoder networks, making our model an attractive candidate for applications where encoding and decoding speed is critical. Additionally, this allows us to sample autoregressively only in the compressed latent space, which is an order of magnitude faster than sampling in pixel space, especially for large images. We demonstrate that a multi-scale hierarchical organization of VQ-VAE, augmented with powerful priors over the latent codes, can generate samples whose quality rivals that of state-of-the-art Generative Adversarial Networks on multifaceted datasets such as ImageNet, while not suffering from GANs' known shortcomings such as mode collapse and lack of diversity.
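The core operation the abstract relies on is vector quantization: each continuous encoder output is snapped to its nearest entry in a learned codebook, yielding the discrete latent codes over which the autoregressive prior is trained. A minimal NumPy sketch of this lookup step (the function name `vector_quantize` and the toy codebook are illustrative, not from the paper):

```python
import numpy as np

def vector_quantize(z, codebook):
    """Map each continuous latent vector to its nearest codebook entry.

    z: (N, D) array of encoder outputs.
    codebook: (K, D) array of K learned embedding vectors.
    Returns the quantized vectors and their discrete indices.
    """
    # Squared Euclidean distance from every latent to every codebook entry.
    dists = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    indices = dists.argmin(axis=1)   # discrete codes, shape (N,)
    quantized = codebook[indices]    # nearest embeddings, shape (N, D)
    return quantized, indices

# Toy usage: a 4-entry codebook of 2-D embeddings.
rng = np.random.default_rng(0)
codebook = rng.normal(size=(4, 2))
z = rng.normal(size=(5, 2))
zq, codes = vector_quantize(z, codebook)
```

In the full model this quantization is applied at each level of the multi-scale hierarchy, and the resulting grids of discrete indices (not the pixels) are what the autoregressive prior models, which is why sampling is so much cheaper than pixel-space autoregression.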