Cutting Recursive Autoencoder Trees

Christian Scheible, Hinrich Schuetze

Jan 20, 2013 — ICLR 2013 conference submission
  • Decision: conference poster (ICLR 2013)
  • Abstract: Deep learning models enjoy considerable success in Natural Language Processing. While deep architectures produce useful representations that lead to improvements in various tasks, they are often difficult to interpret, which makes the analysis of learned structures particularly difficult. We therefore have to rely on empirical tests to see whether a particular structure makes sense. In this paper, we present an analysis of a well-received model that produces structural representations of text: the Semi-Supervised Recursive Autoencoder. We show that for certain tasks, the structure of the autoencoder may be significantly reduced, and we evaluate the produced structures through human judgment.