Meta-learning richer priors for VAEs

22 Nov 2021 (modified: 24 Jan 2022) · AABI 2022 Poster
Keywords: VAE, VAEs, MAML, meta-learning, VampPrior
TL;DR: We employ MAML to obtain richer priors for VAEs and propose a prior that encourages high-level representations
Abstract: Variational auto-encoders have proven capable of capturing complicated data distributions and learning useful latent representations, while advances in meta-learning have made it possible to extract prior knowledge from data. We combine these two approaches and propose a novel flexible prior, the Pseudo-inputs prior, to obtain a richer latent space. We train VAEs using the Model-Agnostic Meta-Learning (MAML) algorithm and show that this achieves reconstruction performance comparable to standard training. However, we show that the MAML-VAE model learns richer latent representations, which we evaluate on unsupervised few-shot classification as a downstream task. Moreover, we show that our proposed Pseudo-inputs prior outperforms baseline priors, including the VampPrior, in both the MAML-trained and standard models, while also encouraging high-level representations through its pseudo-inputs.
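The abstract does not spell out the form of the Pseudo-inputs prior, but it builds on the VampPrior construction: the prior is an equal-weight mixture of the encoder's posteriors evaluated at K learned pseudo-inputs. Below is a minimal NumPy sketch of that mixture density, assuming diagonal-Gaussian posteriors; the function names and the direct use of per-component means/log-variances (standing in for an encoder applied to pseudo-inputs) are illustrative, not from the paper.

```python
import numpy as np

def log_gaussian(z, mean, logvar):
    # Log-density of a diagonal Gaussian, summed over latent dimensions.
    return -0.5 * np.sum(
        np.log(2 * np.pi) + logvar + (z - mean) ** 2 / np.exp(logvar), axis=-1
    )

def pseudo_inputs_prior_logpdf(z, means, logvars):
    # VampPrior-style prior: p(z) = (1/K) * sum_k q(z | u_k), where each
    # component's (mean, logvar) would come from the encoder applied to a
    # learned pseudo-input u_k. Here the components are passed in directly.
    comps = np.stack([log_gaussian(z, m, lv) for m, lv in zip(means, logvars)])
    # Numerically stable log-sum-exp over the K components,
    # minus log K for the uniform mixture weights.
    m = comps.max(axis=0)
    return m + np.log(np.exp(comps - m).sum(axis=0)) - np.log(len(means))
```

With K = 1 and a standard-normal component, this reduces to the usual N(0, I) prior log-density, which is a convenient sanity check when swapping this prior into a VAE's KL term.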