Cutting Recursive Autoencoder Trees
Christian Scheible, Hinrich Schuetze
Jan 20, 2013 (modified: Jan 20, 2013) · ICLR 2013 conference submission · Readers: everyone
Abstract: Deep learning models enjoy considerable success in natural language processing. While deep architectures produce useful representations that lead to improvements on various tasks, they are often difficult to interpret, which makes the analysis of the learned structures particularly hard. We therefore have to rely on empirical tests to determine whether a particular structure makes sense. In this paper, we present an analysis of a well-received model that produces structural representations of text: the Semi-Supervised Recursive Autoencoder. We show that for certain tasks, the structure of the autoencoder can be significantly reduced, and we evaluate the produced structures through human judgment.
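For readers unfamiliar with the model under analysis: a recursive autoencoder builds a tree over a sentence by repeatedly composing two child vectors into a parent vector and scoring how well the parent reconstructs its children. The sketch below illustrates this general idea with untrained random weights and toy embeddings; the dimensions, weight initialization, and greedy lowest-reconstruction-error merging rule are illustrative assumptions, not the paper's implementation or its learned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # toy embedding dimension (assumption, for illustration only)

# Untrained, randomly initialized encoder/decoder weights.
W_enc = rng.normal(scale=0.1, size=(d, 2 * d))
b_enc = np.zeros(d)
W_dec = rng.normal(scale=0.1, size=(2 * d, d))
b_dec = np.zeros(2 * d)

def encode(c1, c2):
    """Compose two child vectors into one parent vector."""
    p = np.tanh(W_enc @ np.concatenate([c1, c2]) + b_enc)
    return p / np.linalg.norm(p)  # length-normalize the parent

def recon_error(c1, c2):
    """Squared error of reconstructing both children from their parent."""
    p = encode(c1, c2)
    r = np.tanh(W_dec @ p + b_dec)
    return float(np.sum((np.concatenate([c1, c2]) - r) ** 2))

def greedy_tree(vectors):
    """Greedily merge the adjacent pair with the lowest reconstruction
    error until a single root vector remains; return (root, tree)."""
    nodes = list(vectors)
    tree = list(range(len(nodes)))  # leaves are word positions
    while len(nodes) > 1:
        errs = [recon_error(nodes[i], nodes[i + 1])
                for i in range(len(nodes) - 1)]
        i = int(np.argmin(errs))
        nodes[i:i + 2] = [encode(nodes[i], nodes[i + 1])]
        tree[i:i + 2] = [(tree[i], tree[i + 1])]
    return nodes[0], tree[0]

# Stand-in word embeddings for a four-word sentence.
words = [rng.normal(size=d) for _ in range(4)]
root, tree = greedy_tree(words)
print(tree)        # nested pairs of word positions; shape depends on the errors
print(root.shape)  # the root is a single d-dimensional sentence vector
```

The nested-pair output is the kind of induced tree structure whose usefulness the paper probes: if such trees can be heavily pruned without hurting task performance, their internal nodes carry less information than the architecture suggests.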