RV-VAE: Integrating Random Variable Algebra into Variational Autoencoders

Published: 31 Jul 2023, Last Modified: 18 Aug 2023, VIPriors 2023 Oral
Keywords: VAE, random variable algebra, probabilistic modeling
TL;DR: Integrating random variable algebra into variational autoencoders to decode full continuous distributions instead of samples drawn via the reparameterization trick.
Abstract: Among deep generative models, variational autoencoders (VAEs) are a central approach for generating new samples from a learned latent space while effectively reconstructing input data. The original formulation requires a stochastic sampling operation, implemented via the reparameterization trick, to approximate the posterior latent distribution. In this paper, we introduce a novel approach that leverages the full distributions of the encoded input to optimize the model over the entire range of the data, instead of over discrete samples. We treat the encoded distributions as continuous random variables and use operations defined by the algebra of random variables during decoding. This approach integrates an innate mathematical prior into the model, helping to improve data efficiency and reduce computational load. Experimental results across different datasets and architectures confirm that this modification enhances the performance of VAE-based architectures. Specifically, our approach improves the reconstruction error and generative capabilities of several VAE architectures, as measured by the Fréchet Inception Distance (FID) metric, while exhibiting similar or better training convergence behavior. Our method exemplifies the power of combining deep learning with inductive priors, promoting data efficiency and reducing reliance on brute-force learning.
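To illustrate the general idea, the sketch below contrasts the standard reparameterization-trick sampling step with analytically propagating a diagonal-Gaussian latent through one linear decoder layer using random variable algebra. This is a minimal, hypothetical example, not the authors' implementation: the class name MomentLinear, the layer sizes, and the assumption of independent latent dimensions are illustrative only.

```python
import torch
import torch.nn as nn

class MomentLinear(nn.Module):
    """Illustrative linear layer that maps input (mean, variance) to output (mean, variance)."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)

    def forward(self, mu, var):
        # E[Wx + b] = W E[x] + b
        out_mu = self.linear(mu)
        # Var[Wx + b]_j = sum_i W_ji^2 Var[x_i], assuming independent (diagonal) latent dimensions
        out_var = var @ (self.linear.weight ** 2).t()
        return out_mu, out_var

# Standard VAE decoding step for comparison: sample z = mu + sigma * eps (reparameterization trick)
mu, log_var = torch.zeros(8, 16), torch.zeros(8, 16)
z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)

# Moment-propagation step: decode the full distribution rather than a single sample
decoder_layer = MomentLinear(16, 32)
out_mu, out_var = decoder_layer(mu, torch.exp(log_var))
```

Nonlinear layers would require further approximations of the transformed moments; the sketch only covers the affine case, where the algebra of random variables gives exact expressions.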
Supplementary Material: pdf
Submission Number: 18