A Geometric Perspective on Variational Autoencoders

Published: 31 Oct 2022, Last Modified: 11 Jan 2023
NeurIPS 2022 Accept
Keywords: Variational autoencoders, latent space modeling, Riemannian geometry
TL;DR: In this paper, we take a new view of the VAE framework and propose to focus on the geometric aspects that a vanilla VAE is able to capture in its latent space.
Abstract: This paper introduces a new interpretation of the Variational Autoencoder framework from a fully geometric point of view. We argue that vanilla VAE models naturally unveil a Riemannian structure in their latent space, and that taking these geometric aspects into account can lead to better interpolations and an improved generation procedure. The proposed sampling method consists of sampling from the uniform distribution intrinsically derived from the learned Riemannian latent space, and we show that using this scheme can make a vanilla VAE competitive with, and even better than, more advanced variants on several benchmark datasets. Since generative models are known to be sensitive to the number of training samples, we also stress the method's robustness in the low-data regime.
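The sampling scheme the abstract describes can be illustrated with a minimal sketch: drawing latent codes with density proportional to the Riemannian volume element sqrt(det G(z)) via rejection sampling. Everything here is an illustrative assumption, not the paper's implementation; the toy metric `metric`, the latent bounding box, and the envelope constant `m_max` are all placeholders (in the paper the metric would be derived from the trained VAE itself).

```python
import numpy as np

def metric(z):
    # Toy diagonal metric G(z); hypothetical stand-in for a VAE-derived metric.
    return np.diag(1.0 + z ** 2)

def volume_element(z):
    # Riemannian volume density: sqrt(det G(z)).
    return np.sqrt(np.linalg.det(metric(z)))

def sample_riemannian_uniform(n, dim=2, bound=3.0, m_max=None, rng=None):
    # Rejection-sample z uniformly w.r.t. the Riemannian measure on
    # the box [-bound, bound]^dim (assumed support for this sketch).
    rng = np.random.default_rng() if rng is None else rng
    if m_max is None:
        # For this toy metric the density is maximal at the box corner.
        m_max = volume_element(np.full(dim, bound))
    samples = []
    while len(samples) < n:
        z = rng.uniform(-bound, bound, size=dim)  # proposal: Lebesgue-uniform
        if rng.uniform(0.0, m_max) < volume_element(z):
            samples.append(z)  # accept with prob. proportional to sqrt(det G)
    return np.array(samples)

zs = sample_riemannian_uniform(100)
print(zs.shape)  # (100, 2)
```

Accepted points concentrate where the volume element is large, which for a decoder-induced metric means regions the model has actually learned, rather than wherever the Gaussian prior happens to place mass.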
Supplementary Material: pdf