Sliced Wasserstein Auto-Encoders

27 Sept 2018, 22:38 (modified: 10 Feb 2022, 11:39) · ICLR 2019 Conference Blind Submission
Keywords: optimal transport, Wasserstein distances, auto-encoders, unsupervised learning
TL;DR: In this paper we use the sliced-Wasserstein distance to shape the latent distribution of an auto-encoder into any samplable prior distribution.
Abstract: In this paper we use the geometric properties of the optimal transport (OT) problem and the Wasserstein distances to define a prior distribution for the latent space of an auto-encoder. We introduce Sliced-Wasserstein Auto-Encoders (SWAE), which enable one to shape the distribution of the latent space into any samplable probability distribution without training an adversarial network or specifying a likelihood function. In short, we regularize the auto-encoder loss with the sliced-Wasserstein distance between the distribution of the encoded training samples and a samplable prior distribution. We show that the proposed formulation has an efficient numerical solution that provides capabilities similar to Wasserstein Auto-Encoders (WAE) and Variational Auto-Encoders (VAE), while benefiting from an embarrassingly simple implementation. We provide extensive error analysis for our algorithm, and show its merits on three benchmark datasets.
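The regularizer described in the abstract can be sketched as follows. This is a minimal illustrative implementation of the sliced-Wasserstein distance between two equal-sized sample sets, not the authors' code: the function name, the number of random projections, and the use of NumPy are all assumptions for exposition.

```python
import numpy as np

def sliced_wasserstein_distance(x, y, num_projections=50, p=2, rng=None):
    """Monte-Carlo estimate of the sliced p-Wasserstein distance between two
    empirical distributions given as (n, d) arrays with equal sample counts.

    Illustrative sketch of the SWAE regularizer, not the authors' implementation.
    """
    rng = np.random.default_rng(rng)
    d = x.shape[1]
    # Draw random directions on the unit sphere in R^d.
    theta = rng.normal(size=(num_projections, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    # Project both sample sets onto each direction: shape (n, num_projections).
    x_proj = x @ theta.T
    y_proj = y @ theta.T
    # In 1-D, the Wasserstein distance between equal-sized empirical measures
    # reduces to comparing sorted samples.
    x_sorted = np.sort(x_proj, axis=0)
    y_sorted = np.sort(y_proj, axis=0)
    return np.mean(np.abs(x_sorted - y_sorted) ** p)
```

In an auto-encoder training loop, `x` would be a batch of encoded training samples and `y` a batch drawn from the chosen prior; the returned scalar is added to the reconstruction loss as the regularization term.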
Data: [CelebA](https://paperswithcode.com/dataset/celeba)