Style Memory: Making a Classifier Network Generative

Anonymous

Nov 03, 2017 (modified: Nov 03, 2017) ICLR 2018 Conference Blind Submission
  • Abstract: Deep networks have shown great performance in classification tasks. However, the parameters learned by classifier networks usually discard stylistic information of the input in favour of information strictly relevant to classification. We introduce a network that has the capacity to do both classification and reconstruction by adding a "style memory" to the output layer of the network. We also show how to train such a network as stacked autoencoders, jointly minimizing both classification and reconstruction losses. The generative function of our network demonstrates that combining the style-memory neurons with the classifier neurons yields good reconstructions of the inputs. We further investigate the nature of the style memory and how it relates to composing digits from MNIST.
  • TL;DR: Augmenting the top layer of a classifier network with a style memory enables it to be generative.
  • Keywords: neural networks, autoencoder, generative, feed-back
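The idea in the abstract — a top layer split into classifier neurons and style-memory neurons, trained with a joint classification-plus-reconstruction objective — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the layer sizes, the single-layer encoder/decoder, the sigmoid activations, and the loss weighting `alpha` are all assumptions for the sake of the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): 784-pixel input,
# 10 classifier neurons, 16 style-memory neurons.
n_in, n_class, n_style = 784, 10, 16

# Encoder maps the input to a top layer of [classifier ; style memory];
# decoder reconstructs the input from that combined code.
W_enc = rng.normal(0.0, 0.01, (n_in, n_class + n_style))
W_dec = rng.normal(0.0, 0.01, (n_class + n_style, n_in))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    code = sigmoid(x @ W_enc)       # top layer activations
    y_hat = code[:, :n_class]       # classifier neurons
    style = code[:, n_class:]       # style-memory neurons
    x_hat = sigmoid(code @ W_dec)   # reconstruction from the full code
    return y_hat, style, x_hat

def joint_loss(x, y, alpha=0.5):
    """Weighted sum of classification and reconstruction losses
    (the weighting scheme here is an assumption)."""
    y_hat, _, x_hat = forward(x)
    eps = 1e-9
    class_loss = -np.mean(np.sum(y * np.log(y_hat + eps), axis=1))
    recon_loss = np.mean(np.sum((x - x_hat) ** 2, axis=1))
    return alpha * class_loss + (1.0 - alpha) * recon_loss

# Tiny batch of random "images" and one-hot labels.
x = rng.random((4, n_in))
y = np.eye(n_class)[rng.integers(0, n_class, 4)]
loss = joint_loss(x, y)
```

Both loss terms depend on the same top-layer code, so minimizing the joint loss forces the style memory to retain the input information that the classifier neurons alone would discard — which is what makes reconstruction from the combined code possible.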
