A Geometric Perspective on Variational Autoencoders

Published: 28 Jan 2022, Last Modified: 13 Feb 2023. ICLR 2022 Submission.
Keywords: Variational Autoencoders, Riemannian geometry
Abstract: In this paper, we propose a geometric interpretation of the Variational Autoencoder framework. We show that VAEs naturally unveil a Riemannian structure in the learned latent space. Moreover, we show that exploiting these geometric considerations can significantly improve generation from the vanilla VAE, which can then compete with more advanced VAE models on four benchmark data sets. In particular, we propose a new way to generate samples, consisting in sampling from the uniform distribution derived intrinsically from the Riemannian manifold learned by a VAE. We also stress the proposed method's robustness in the low-data regime, which is known to be very challenging for deep generative models. Finally, we validate the method on a complex neuroimaging data set combining high-dimensional data with small sample sizes.
One-sentence Summary: In this paper, we adopt a new view of the VAE framework and propose to focus on the geometric aspects that a vanilla VAE captures in its latent space.
Supplementary Material: zip
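The abstract's proposed sampling scheme (drawing latent codes from the uniform distribution induced by the learned Riemannian metric, i.e. with density proportional to the Riemannian volume element sqrt(det G(z))) can be sketched as follows. This is a minimal illustrative sketch only: the paper derives the metric G from the trained VAE, whereas here `metric` is a hypothetical toy SPD-valued function, and plain rejection sampling stands in for whatever sampler the authors actually use.

```python
import numpy as np

# Hypothetical latent metric G(z). In the paper's setting this would be
# derived from the trained VAE; here it is a toy position-dependent
# 2x2 SPD matrix, purely for illustration.
def metric(z):
    s = 1.0 + np.sum(z ** 2)
    return np.array([[s, 0.3],
                     [0.3, s]])

def riemannian_uniform_sample(n, bounds=3.0, seed=None):
    """Rejection-sample n points on [-bounds, bounds]^2 with density
    proportional to sqrt(det G(z)), i.e. uniform with respect to the
    Riemannian volume measure induced by `metric`."""
    rng = np.random.default_rng(seed)
    # Crude upper bound for sqrt(det G) on the box, estimated on random probes.
    probes = rng.uniform(-bounds, bounds, size=(2000, 2))
    m = max(np.sqrt(np.linalg.det(metric(z))) for z in probes) * 1.1
    samples = []
    while len(samples) < n:
        z = rng.uniform(-bounds, bounds, size=2)
        # Accept z with probability sqrt(det G(z)) / m.
        if rng.uniform() * m < np.sqrt(np.linalg.det(metric(z))):
            samples.append(z)
    return np.array(samples)

pts = riemannian_uniform_sample(100, seed=0)
print(pts.shape)  # (100, 2)
```

Accepted points concentrate where the volume element sqrt(det G(z)) is large, which is the intuition behind sampling "uniformly on the manifold" rather than from a fixed Gaussian prior.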