Encoding and Decoding Representations with Sum- and Max-Product Networks

Antonio Vergari, Robert Peharz, Nicola Di Mauro, Floriana Esposito

Nov 04, 2016 (modified: Jan 11, 2017) · ICLR 2017 conference submission · Readers: everyone
  • Abstract: Sum-Product Networks (SPNs) are expressive deep architectures for representing probability distributions while still allowing exact and efficient inference. SPNs have been successfully applied in several domains; however, they have always been treated as black-box distribution estimators. In this paper, we argue that, due to their recursive definition, SPNs can also be naturally employed as hierarchical feature extractors and thus for unsupervised representation learning. Moreover, when converted into Max-Product Networks (MPNs), it is possible to decode such representations back into the original input space. In this way, MPNs can be interpreted as a kind of generative autoencoder, even though they were never trained to reconstruct the input data. We show how these learned representations, when visualized, indeed correspond to "meaningful parts" of the training data. They also yield large improvements when used in structured prediction tasks. As shown in extensive experiments, SPN and MPN encoding and decoding schemes prove highly competitive with those based on RBMs and other stacked autoencoder architectures. (A minimal code sketch of the encode/decode idea follows below.)
  • TL;DR: Sum-Product Networks can be effectively employed for unsupervised representation learning; when turned into Max-Product Networks, they can also be used as encoder-decoders
  • Conflicts: uniba.it, tugraz.at, medunigraz.at, cs.washington.edu
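
The following is a minimal, runnable sketch (not the authors' code) of the idea described in the abstract: evaluate an SPN bottom-up to encode an input as the vector of its inner-node activations, then switch sum nodes to max nodes (turning the SPN into an MPN) and traverse top-down, following the highest-scoring children, to decode back into the input space via an MPE-style pass. The toy structure, class names, and weights here are illustrative assumptions, not the architecture or learning scheme used in the paper.

```python
# Illustrative sketch only: a tiny SPN/MPN over binary variables.
# All structure and weights below are hypothetical.

class Leaf:
    """Bernoulli leaf over a single binary variable."""
    def __init__(self, var, p):
        self.var, self.p = var, p

    def value(self, x):
        self.val = self.p if x[self.var] == 1 else 1.0 - self.p
        return self.val

    def decode(self, assignment):
        # Emit the most probable state of this leaf's variable.
        assignment[self.var] = 1 if self.p >= 0.5 else 0


class Product:
    def __init__(self, children):
        self.children = children

    def value(self, x):
        self.val = 1.0
        for c in self.children:
            self.val *= c.value(x)
        return self.val

    def decode(self, assignment):
        for c in self.children:
            c.decode(assignment)


class Sum:
    def __init__(self, children, weights):
        self.children, self.weights = children, weights
        self.max_product = False  # False: SPN sum node; True: MPN max node

    def value(self, x):
        scores = [w * c.value(x) for c, w in zip(self.children, self.weights)]
        self.best = scores.index(max(scores))
        self.val = max(scores) if self.max_product else sum(scores)
        return self.val

    def decode(self, assignment):
        # MPN decoding: descend only into the highest-scoring child.
        self.children[self.best].decode(assignment)


# Toy SPN over two binary variables X0 and X1.
p1 = Product([Leaf(0, 0.9), Leaf(1, 0.8)])
p2 = Product([Leaf(0, 0.2), Leaf(1, 0.1)])
root = Sum([p1, p2], [0.6, 0.4])
inner_nodes = [p1, p2, root]

x = {0: 1, 1: 0}
root.value(x)                                # bottom-up (encoding) pass
embedding = [n.val for n in inner_nodes]     # inner activations as features
print("embedding:", embedding)

root.max_product = True                      # convert the SPN into an MPN
root.value(x)                                # max-product evaluation
reconstruction = {}
root.decode(reconstruction)                  # top-down (decoding) pass
print("reconstruction:", reconstruction)
```

Note that this sketch simply reconstructs the input whose bottom-up evaluation produced the embedding; the paper's decoding scheme operates on the embedding itself, and its networks are learned from data rather than hand-specified.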
