Unscented Autoencoder

22 Sept 2022 (modified: 12 Mar 2024) · ICLR 2023 Conference Withdrawn Submission · Readers: Everyone
Keywords: generative models, variational autoencoders, deterministic autoencoders, unscented transform, Wasserstein metric
TL;DR: Sampling fixed sigma points and regularizing posterior moments in VAEs promotes reconstruction quality while preserving a smooth latent space.
Abstract: The Variational Autoencoder (VAE) is a seminal approach to deep generative modeling with latent variables. It performs posterior inference by parameterizing a distribution over latent variables in a stochastic encoder (while penalizing the disparity to an assumed standard normal prior) and reconstructs samples via a deterministic decoder. In this work, we start from a simple interpretation of the reconstruction process: a nonlinear transformation of the stochastic encoder. We apply the Unscented Transform (UT) from the field of filtering and control -- a well-known distribution approximation used in the Unscented Kalman Filter (UKF). A finite set of deterministically sampled statistics, called sigma points, provides a more informative and lower-variance posterior representation than the ubiquitous noise scaling of the reparameterization trick. Inspired by the unscented transform, we derive a novel deterministic flavor of the VAE, the Unscented Autoencoder (UAE), trained purely with regularization-like terms on the per-sample, full-covariance posterior. A key ingredient in its performance is the Wasserstein distribution metric in place of the Kullback-Leibler (KL) divergence, which effectively regularizes the covariance matrix while allowing for a sharper posterior, especially benefiting reconstruction. Our results are consistent with recent findings that deterministic models can achieve good sample quality and smooth interpolation in the latent space. We empirically show superior Fréchet Inception Distance (FID) scores over closely related models, as well as lower training variance than the VAE.
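The abstract names two concrete mechanisms: deterministic sigma-point sampling of the posterior (in place of the reparameterization trick) and a Wasserstein metric between the per-sample Gaussian posterior and the standard-normal prior (in place of the KL divergence). As a rough illustration only -- not the authors' implementation, which is not shown on this page -- here is a minimal NumPy sketch of both. The function names, the `kappa` scaling parameter, and the eigenvalue-based Wasserstein computation are assumptions of this sketch.

```python
import numpy as np

def sigma_points(mu, cov, kappa=0.0):
    """Deterministic sigma points of N(mu, cov) (standard UT construction).

    Returns the 2n+1 points: the mean, plus/minus the columns of a
    matrix square root of (n + kappa) * cov.
    """
    n = mu.shape[0]
    # Cholesky factor as a matrix square root of the scaled covariance.
    L = np.linalg.cholesky((n + kappa) * cov)
    pts = [mu]
    for i in range(n):
        pts.append(mu + L[:, i])
        pts.append(mu - L[:, i])
    return np.stack(pts)  # shape (2n + 1, n)

def w2_to_standard_normal(mu, cov):
    """Squared 2-Wasserstein distance W2^2(N(mu, cov), N(0, I)).

    For Gaussians this has the closed form
    ||mu||^2 + Tr(cov) + n - 2 * Tr(cov^{1/2}),
    computed here from the eigenvalues of cov.
    """
    eigvals = np.linalg.eigvalsh(cov)
    # Clip tiny negative eigenvalues caused by floating-point error.
    return mu @ mu + np.sum((np.sqrt(np.clip(eigvals, 0.0, None)) - 1.0) ** 2)

# Toy usage: a 2-D full-covariance posterior.
mu = np.array([0.5, -0.3])
cov = np.array([[0.4, 0.1],
                [0.1, 0.3]])
pts = sigma_points(mu, cov)          # 5 sigma points for n = 2
reg = w2_to_standard_normal(mu, cov)  # regularization-like prior term
```

In a UAE-style training loop, one would presumably decode all 2n+1 sigma points of each per-sample posterior and add the Wasserstein term to the reconstruction loss; the exact weighting and loss composition are specific to the paper and not reproduced here.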
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Generative models
Supplementary Material: zip
Community Implementations: [2 code implementations](https://www.catalyzex.com/paper/arxiv:2306.05256/code)