Self-Supervised Variational Auto-Encoders

Published: 21 Dec 2020, Last Modified: 12 Mar 2024
Venue: AABI 2020
Keywords: variational inference, self-supervised learning, generative modeling
TL;DR: We propose using self-supervised transformations to decompose the modeling of a complex distribution into the modeling of simpler (conditional) distributions.
Abstract: Variational Auto-Encoders (VAEs) constitute a single framework for density estimation, compression, and data generation. Here, we present a novel class of generative models, called self-supervised Variational Auto-Encoders (selfVAEs), that utilize deterministic and discrete transformations of data. These models allow both conditional and unconditional sampling while simplifying the objective function. First, we use a single self-supervised transformation as a latent variable, where the transformation is either downscaling or edge detection. Next, we consider a hierarchical architecture, i.e., multiple transformations, and show its benefits over the standard VAE. The flexibility of the selfVAE in data reconstruction finds a particularly interesting use case in data compression, where we can trade off memory for better data quality, and vice versa. We present the performance of our approach on CIFAR-10, Imagenette64, and CelebA.
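To make the decomposition concrete, below is a minimal PyTorch sketch of the core idea as described in the abstract: a deterministic transformation y = d(x) (here, 2x downscaling) splits p(x) into an unconditional model of p(y) and a conditional model of p(x | y), each trained with its own ELBO. This is an illustrative assumption-laden sketch, not the authors' implementation (see the linked code below); the names `SmallVAE`, `downscale`, the MLP architecture, and all sizes are hypothetical choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def downscale(x):
    # Deterministic self-supervised transformation: 2x average-pool downscaling.
    return F.avg_pool2d(x, kernel_size=2)

def reparameterize(mu, logvar):
    # Sample z ~ N(mu, sigma^2) via the reparameterization trick.
    return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

def kl_to_std_normal(mu, logvar):
    # KL(q(z|.) || N(0, I)), summed over latent dims, averaged over the batch.
    return (-0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1)).mean()

class SmallVAE(nn.Module):
    """A tiny MLP VAE over flattened images; optionally conditioned on `cond`."""
    def __init__(self, in_dim, z_dim=32, cond_dim=0):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim + cond_dim, 256), nn.ReLU(),
                                 nn.Linear(256, 2 * z_dim))
        self.dec = nn.Sequential(nn.Linear(z_dim + cond_dim, 256), nn.ReLU(),
                                 nn.Linear(256, in_dim), nn.Sigmoid())

    def forward(self, target, cond=None):
        h = target if cond is None else torch.cat([target, cond], dim=1)
        mu, logvar = self.enc(h).chunk(2, dim=1)
        z = reparameterize(mu, logvar)
        z = z if cond is None else torch.cat([z, cond], dim=1)
        recon = self.dec(z)
        # Bernoulli-style likelihood (pixels in [0, 1]) plus the KL term.
        nll = F.binary_cross_entropy(recon, target, reduction='none').sum(1).mean()
        return nll + kl_to_std_normal(mu, logvar)

# The decomposition: since y = downscale(x) is deterministic, no inference
# over y is needed, and the objective is ELBO(y) + conditional ELBO(x | y).
x = torch.rand(8, 3, 32, 32)                                # dummy batch
y = downscale(x)                                            # 8 x 3 x 16 x 16
xf, yf = x.flatten(1), y.flatten(1)
vae_y = SmallVAE(in_dim=yf.shape[1])                        # models p(y)
vae_x = SmallVAE(in_dim=xf.shape[1], cond_dim=yf.shape[1])  # models p(x | y)
loss = vae_y(yf) + vae_x(xf, cond=yf)
loss.backward()
```

This structure also hints at the compression trade-off mentioned in the abstract: one can store only the (cheaper) downscaled y and decode x conditionally for lower memory at lower quality, or store both stages for higher fidelity. Stacking further transformations in the same fashion would yield the hierarchical variant.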
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:2010.02014/code)