Conditional Image Generation by Conditioning Variational Auto-Encoders


Sep 29, 2021 (edited Nov 23, 2021) · ICLR 2022 Conference Blind Submission
  • Keywords: variational auto-encoders, Bayesian inference, variational inference, amortized inference, image completion
  • Abstract: We present a conditional variational auto-encoder (VAE) which, to avoid the substantial cost of training from scratch, uses an architecture and training objective capable of leveraging a foundation model in the form of a pretrained unconditional VAE. Training the conditional VAE then involves training an artifact to perform amortized inference over the unconditional VAE's latent variables given a conditioning input. We demonstrate our approach on the image completion task, and show that it outperforms state-of-the-art GAN-based approaches at faithfully representing the inherent uncertainty. We conclude by describing and demonstrating an application that requires an image completion model with the capabilities ours exhibits: the use of Bayesian optimal experimental design to guide a sensor.
  • One-sentence Summary: We create fast-to-train conditional VAEs using amortized inference in pretrained unconditional VAEs, and demonstrate diverse samples on image completion tasks.
  • Supplementary Material: zip
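The training setup sketched in the abstract — keep a pretrained unconditional generative model frozen and train only an amortized inference network that maps a conditioning input (here, the observed part of an image) to the frozen model's latents — can be illustrated with a deliberately tiny linear toy. This is not the paper's architecture: the decoder below is a hypothetical linear stand-in for a pretrained VAE decoder, the inference network is a single linear layer, and squared-error reconstruction replaces the ELBO objective; all names (`W_dec`, `W_inf`) are invented for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the frozen "pretrained unconditional VAE": a linear
# decoder mapping a 2-D latent z to a 6-pixel image. (The paper uses a
# real VAE; this linear map only illustrates the training setup.)
W_dec = rng.normal(size=(6, 2))

# Conditioning input for image completion: only the first 3 pixels
# of each image are observed.
mask = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])

# Data drawn from the frozen generative model itself.
Z_true = rng.normal(size=(200, 2))
X = Z_true @ W_dec.T                    # full images, shape (200, 6)
X_obs = X * mask                        # masked images used as conditioning

# Amortized inference network q(z | observed pixels): a single linear
# layer W_inf, the only trainable artifact; the decoder stays frozen.
W_inf = rng.normal(size=(2, 6)) * 0.01

def loss(W):
    z_hat = X_obs @ W.T                 # infer latents from observations
    x_hat = z_hat @ W_dec.T             # decode latents to full images
    return np.mean(np.sum((x_hat - X) ** 2, axis=1))

# Plain gradient descent using the analytic gradient of the squared
# error (a real implementation would optimize an ELBO with autodiff).
lr, steps = 0.002, 2000
before = loss(W_inf)
for _ in range(steps):
    err = (X_obs @ W_inf.T) @ W_dec.T - X          # residuals, (200, 6)
    grad = 2.0 / len(X) * W_dec.T @ err.T @ X_obs  # dL/dW_inf, (2, 6)
    W_inf -= lr * grad
after = loss(W_inf)
print(f"reconstruction loss: {before:.3f} -> {after:.3f}")
```

Only `W_inf` is updated; the frozen decoder is what makes training cheap relative to learning a conditional model from scratch, since the inference network alone must adapt to the conditioning input.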