Meta-learning richer priors for VAEs

Published: 29 Jan 2022, Last Modified: 05 May 2023, AABI 2022 Poster
Keywords: VAE, VAEs, MAML, meta-learning, VampPrior
TL;DR: We employ MAML to obtain richer priors for VAEs and propose a prior that encourages high-level representations
Abstract: Variational auto-encoders have proven capable of capturing complicated data distributions and learning useful latent representations, while advances in meta-learning have made it possible to extract prior knowledge from data. We combine these two approaches and propose a novel flexible prior, namely the Pseudo-inputs prior, to obtain a richer latent space. We train VAEs using the Model-Agnostic Meta-Learning (MAML) algorithm and show that this achieves reconstruction performance comparable to standard training. However, we show that this MAML-VAE model learns richer latent representations, which we evaluate on unsupervised few-shot classification as a downstream task. Moreover, we show that our proposed Pseudo-inputs prior outperforms baseline priors, including the VampPrior, in both models, while also encouraging high-level representations through its pseudo-inputs.
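For readers unfamiliar with pseudo-input priors, the sketch below illustrates the general idea behind the VampPrior baseline mentioned in the abstract: the prior is a uniform mixture of the encoder's variational posteriors evaluated at a set of learned pseudo-inputs. This is a minimal, hypothetical PyTorch sketch, not the authors' implementation of their Pseudo-inputs prior; the class and parameter names (`PseudoInputsPrior`, `num_pseudo_inputs`) are illustrative, and the encoder interface (returning a mean and log-variance per input) is assumed.

```python
import math

import torch
import torch.nn as nn


class PseudoInputsPrior(nn.Module):
    """Sketch of a VampPrior-style prior: a uniform mixture of the encoder's
    variational posteriors evaluated at K learned pseudo-inputs."""

    def __init__(self, encoder, num_pseudo_inputs, input_dim):
        super().__init__()
        # Assumption: `encoder(x)` returns (mu, log_var) for a batch of inputs.
        self.encoder = encoder
        # Learnable pseudo-inputs living in data space, initialised randomly.
        self.pseudo_inputs = nn.Parameter(torch.randn(num_pseudo_inputs, input_dim))

    def log_prob(self, z):
        """Log-density of latent codes z with shape (batch, latent_dim)."""
        mu, log_var = self.encoder(self.pseudo_inputs)   # each (K, latent_dim)
        z = z.unsqueeze(1)                               # (batch, 1, latent_dim)
        # Diagonal-Gaussian log-density of z under every mixture component.
        log_comp = -0.5 * (
            math.log(2 * math.pi) + log_var + (z - mu) ** 2 / log_var.exp()
        ).sum(dim=-1)                                    # (batch, K)
        # Uniform mixture weights 1/K: logsumexp over components minus log K.
        k = self.pseudo_inputs.shape[0]
        return torch.logsumexp(log_comp, dim=1) - math.log(k)
```

In a VAE training loop, the KL term of the ELBO would be estimated by sampling z from the approximate posterior and computing `log q(z|x) - prior.log_prob(z)`, so the pseudo-inputs are updated jointly with the encoder and decoder by gradient descent.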