Cooperative Training of Descriptor and Generator Networks

29 Nov 2024 (modified: 21 Jul 2022) · Submitted to ICLR 2017
Abstract: This paper studies the cooperative training of two probabilistic models of signals such as images. Both models are parametrized by convolutional neural networks (ConvNets). The first network is a descriptor network, which is an exponential family model or an energy-based model, whose feature statistics or energy function are defined by a bottom-up ConvNet, which maps the observed signal to the feature statistics. The second network is a generator network, which is a non-linear version of factor analysis. It is defined by a top-down ConvNet, which maps the latent factors to the observed signal. The maximum likelihood training algorithms of both the descriptor net and the generator net are in the form of alternating back-propagation, and both algorithms involve Langevin sampling. We observe that the two training algorithms can cooperate with each other by jump-starting each other’s Langevin sampling, and they can be seamlessly interwoven into a CoopNets algorithm that can train both nets simultaneously.
TL;DR: Cooperative training of the descriptor and generator networks by coupling two maximum likelihood learning algorithms.
Conflicts: ucla.edu
Keywords: Unsupervised Learning, Deep Learning
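To make the cooperative loop described in the abstract concrete, here is a toy one-dimensional sketch of the CoopNets idea: the generator jump-starts the descriptor's Langevin sampling, the descriptor revises the samples, the descriptor updates by matching statistics of data and revised samples, and the generator updates by learning from the revised samples with their known latent factors. The Gaussian energy, the linear generator, and all hyperparameters below are illustrative assumptions for a minimal demo, not the paper's actual ConvNet models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the models should learn that the data is centered near 2.0.
data = rng.normal(2.0, 1.0, size=1000)

# Descriptor (energy-based model): E(x) = (x - mu)^2 / 2, one parameter mu.
mu = 0.0
# Generator (toy stand-in for factor analysis): g(z) = a*z + b.
a, b = 1.0, 0.0

step, n_langevin, lr = 0.1, 10, 0.5  # illustrative hyperparameters

for it in range(200):
    # 1. Generator proposes initial samples, jump-starting the Langevin chain.
    z = rng.normal(size=256)
    x = a * z + b
    # 2. Descriptor revises the samples by Langevin dynamics on its energy.
    for _ in range(n_langevin):
        grad_E = x - mu            # gradient of E(x) w.r.t. x
        x = x - 0.5 * step**2 * grad_E + step * rng.normal(size=x.shape)
    # 3. Descriptor update: MLE gradient = data statistics minus sample statistics.
    mu += lr * (data.mean() - x.mean())
    # 4. Generator update: regress the revised samples on the known latents z.
    a, b = np.polyfit(z, x, 1)

print(mu, b)  # both should drift toward the data mean, ~2.0
```

In this sketch the generator never needs to infer latent factors by its own Langevin sampling, because it knows the z that produced each revised sample; this is the "jump-starting" cooperation the abstract refers to, reduced to closed-form least squares in one dimension.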
