Very Deep VAEs Generalize Autoregressive Models and Can Outperform Them on Images

Published: 12 Jan 2021, Last Modified: 03 Apr 2024
Venue: ICLR 2021 Spotlight
Readers: Everyone
Keywords: VAE, generative modeling, deep learning, likelihood-based models
Abstract: We present a hierarchical VAE that, for the first time, generates samples quickly $\textit{and}$ outperforms the PixelCNN in log-likelihood on all natural image benchmarks. We begin by observing that, in theory, VAEs can actually represent autoregressive models, as well as faster, better models if they exist, when made sufficiently deep. Despite this, autoregressive models have historically outperformed VAEs in log-likelihood. We test if insufficient depth explains why by scaling a VAE to greater stochastic depth than previously explored and evaluating it on CIFAR-10, ImageNet, and FFHQ. In comparison to the PixelCNN, these very deep VAEs achieve higher likelihoods, use fewer parameters, generate samples thousands of times faster, and are more easily applied to high-resolution images. Qualitative studies suggest this is because the VAE learns efficient hierarchical visual representations. We release our source code and models at https://github.com/openai/vdvae.
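For a concrete picture of what "greater stochastic depth" means here, below is a minimal PyTorch sketch of one top-down block of a hierarchical VAE. The layer names, shapes, and structure are illustrative assumptions, not the released openai/vdvae code: the actual repository stacks many such blocks across multiple resolutions, each with a conditional prior over its latent and a posterior that also sees bottom-up encoder features.

```python
import torch
import torch.nn as nn


class TopDownBlock(nn.Module):
    """One stochastic layer of a top-down hierarchical VAE (illustrative sketch,
    not the released openai/vdvae code). The prior over z is conditioned on the
    decoder state arriving from the layers above; the posterior additionally
    sees bottom-up features from the encoder at the same resolution."""

    def __init__(self, width: int, zdim: int):
        super().__init__()
        self.zdim = zdim
        # hypothetical 1x1 convolutions; the real model uses residual bottleneck blocks
        self.prior = nn.Conv2d(width, 2 * zdim + width, 1)      # prior mean/logvar + residual update
        self.posterior = nn.Conv2d(2 * width, 2 * zdim, 1)      # posterior mean/logvar
        self.z_proj = nn.Conv2d(zdim, width, 1)                 # inject the sampled z into the state
        self.out = nn.Sequential(nn.GELU(), nn.Conv2d(width, width, 3, padding=1))

    def forward(self, state, enc_feat):
        width = state.shape[1]
        pm, plv, res = self.prior(state).split([self.zdim, self.zdim, width], dim=1)
        qm, qlv = self.posterior(torch.cat([state, enc_feat], dim=1)).chunk(2, dim=1)
        z = qm + torch.randn_like(qm) * torch.exp(0.5 * qlv)    # reparameterization trick
        # KL(q || p) between two diagonal Gaussians, summed over latent dimensions
        kl = 0.5 * (plv - qlv + (qlv.exp() + (qm - pm) ** 2) / plv.exp() - 1).sum(dim=(1, 2, 3))
        state = state + res + self.z_proj(z)
        return state + self.out(state), kl
```

A full model of this kind would stack dozens of such blocks and sum the per-block KL terms with the reconstruction likelihood to form the ELBO; "stochastic depth" refers to the number of these latent layers.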
One-sentence Summary: We argue deeper VAEs should perform better, implement one, and show it outperforms all PixelCNN-based autoregressive models in likelihood, while being substantially more efficient.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Supplementary Material: zip
Code: [openai/vdvae](https://github.com/openai/vdvae) + [7 community implementations](https://paperswithcode.com/paper/?openreview=RLRXCV6DbEJ)
Data: [CIFAR-10](https://paperswithcode.com/dataset/cifar-10), [FFHQ](https://paperswithcode.com/dataset/ffhq)
Community Implementations: [2 code implementations](https://www.catalyzex.com/paper/arxiv:2011.10650/code)