Keywords: Generative model, Autoencoder, Dropout, Geometric regularization
Abstract: We propose a generative model termed Deciphering Autoencoders.
In this model, we assign a unique random dropout pattern to each data point in the training dataset and then train an autoencoder to reconstruct the corresponding data point, using that pattern as the information to be encoded.
Even when a completely random dropout pattern is assigned to each data point, regardless of similarities between data points, a sufficiently large encoder can smoothly map these patterns to a low-dimensional latent space from which the individual training data points are reconstructed.
During inference, feeding the model a dropout pattern that was not used during training allows it to function as a generator.
Since the training of Deciphering Autoencoders relies solely on the reconstruction error, it is more stable than the training of other generative models.
Despite their simplicity, Deciphering Autoencoders achieve sampling quality comparable to that of DCGAN on the CIFAR-10 dataset.
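The following is a minimal, self-contained PyTorch sketch of one plausible reading of the procedure described above, in which each training point's assigned binary dropout pattern is fed directly to the encoder as the code to be deciphered. The network sizes, mask dimensionality, and optimizer settings are illustrative assumptions, not the authors' configuration.

```python
# Illustrative sketch only: the architecture and hyperparameters below are
# assumptions; the abstract does not specify them.
import torch
import torch.nn as nn


class DecipheringAutoencoder(nn.Module):
    """Maps a fixed random dropout (binary mask) pattern to a data point."""

    def __init__(self, code_dim=512, latent_dim=64, data_dim=3 * 32 * 32):
        super().__init__()
        # Encoder: compresses the assigned dropout pattern into a latent code.
        self.encoder = nn.Sequential(
            nn.Linear(code_dim, 1024), nn.ReLU(),
            nn.Linear(1024, latent_dim),
        )
        # Decoder: reconstructs the data point from the latent code.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 1024), nn.ReLU(),
            nn.Linear(1024, data_dim), nn.Tanh(),
        )

    def forward(self, mask):
        return self.decoder(self.encoder(mask))


def train(data, code_dim=512, epochs=100, lr=1e-3, device="cpu"):
    # Assign one fixed random binary dropout pattern to each training point.
    n = data.shape[0]
    masks = (torch.rand(n, code_dim, device=device) > 0.5).float()
    model = DecipheringAutoencoder(code_dim, data_dim=data.shape[1]).to(device)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        recon = model(masks)
        loss = ((recon - data) ** 2).mean()  # plain reconstruction error
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model


def sample(model, num_samples, code_dim=512, device="cpu"):
    # At inference, feed dropout patterns not seen during training to generate.
    new_masks = (torch.rand(num_samples, code_dim, device=device) > 0.5).float()
    with torch.no_grad():
        return model(new_masks)
```

Under this reading, generation reduces to drawing a fresh random mask and decoding it; whether the pattern enters as an input vector or as a multiplicative mask on hidden units is left open by the abstract, and the sketch adopts the former for simplicity.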
Submission Number: 78