Training Deep Generative Models via Auxiliary Supervised Learning

29 Sept 2021 (modified: 13 Feb 2023) · ICLR 2022 Conference Withdrawn Submission
Keywords: deep generative models, supervised learning, representation learning, mutual information, latent variable model
Abstract: Deep generative modeling has long been viewed as a challenging unsupervised learning problem, partly due to the lack of labels and the high dimensionality of the data. Although various latent variable models have been proposed to tackle these difficulties, in such models the latent variable serves only as a device to model the observed data and is typically averaged out during training. In this article, we show that by introducing a properly pre-trained encoder, the latent variable can play a more important role, decomposing the training of a deep generative model into a supervised learning problem and a much simpler unsupervised learning task. With this new training method, which we call the auxiliary supervised learning (ASL) framework, deep generative models can benefit from the enormous success of deep supervised learning and representation learning techniques. Through evaluations on a variety of synthetic and real data sets, we demonstrate that ASL is a stable, efficient, and accurate training framework for deep generative models.
One-sentence Summary: A simple yet powerful framework for training deep generative models that makes use of supervised learning and representation learning techniques.
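
To make the decomposition in the abstract concrete, the following PyTorch sketch illustrates one way the ASL idea could look in code: a pre-trained, frozen encoder supplies latent targets z = E(x); the decoder is then trained as an ordinary supervised regression problem (z -> x); and the remaining unsupervised task is to model the latent distribution, shown here with a simple Gaussian fit. All module definitions, dimensions, the MSE loss, and the Gaussian latent model are assumptions for illustration; the paper's actual architecture and objectives are not specified on this page.

import torch
import torch.nn as nn

latent_dim, data_dim = 32, 784  # hypothetical sizes (e.g., flattened MNIST)

# Hypothetical encoder/decoder; in ASL the encoder would come from a
# representation learning method, pre-trained before this stage.
encoder = nn.Sequential(nn.Linear(data_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, data_dim))

# Freeze the pre-trained encoder: it only provides latent targets.
for p in encoder.parameters():
    p.requires_grad_(False)

opt = torch.optim.Adam(decoder.parameters(), lr=1e-3)

def supervised_step(x):
    """One step of the supervised sub-problem: reconstruct x from z = E(x)."""
    with torch.no_grad():
        z = encoder(x)  # fixed targets from the pre-trained encoder
    loss = nn.functional.mse_loss(decoder(z), x)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

def fit_latent_gaussian(all_x):
    """The simpler unsupervised task: fit a density model to the latents
    (a diagonal Gaussian here; a small flow would be a natural alternative)."""
    with torch.no_grad():
        z = encoder(all_x)
    return z.mean(0), z.std(0)

def sample(n, mu, sigma):
    """Generate new data by sampling latents and decoding them."""
    with torch.no_grad():
        z = mu + sigma * torch.randn(n, latent_dim)
        return decoder(z)

Under this reading, the hard unsupervised problem (modeling high-dimensional x directly) is replaced by a well-conditioned supervised regression plus density estimation in a low-dimensional latent space, which is where the claimed stability and efficiency would come from.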