Nov 03, 2016 (modified: Feb 07, 2017) · ICLR 2017 conference submission · readers: everyone
Abstract: There is a lot of research interest in encoding variable-length sentences into fixed-length vectors in a way that preserves sentence meaning. Two common methods are representations based on averaging word vectors, and representations based on the hidden states of recurrent neural networks such as LSTMs. The sentence vectors are used as features for subsequent machine learning tasks or for pre-training in the context of deep learning. However, not much is known about the properties encoded in these sentence representations, or about the language information they capture.

We propose a framework that facilitates a better understanding of the encoded representations. We define prediction tasks around isolated aspects of sentence structure (namely sentence length, word content, and word order), and score representations by the ability to train a classifier to solve each prediction task when using the representation as input. We demonstrate the potential contribution of the approach by analyzing different sentence representation mechanisms. The analysis sheds light on the relative strengths of different sentence embedding methods with respect to these low-level prediction tasks, and on the effect of the encoded vector's dimensionality on the resulting representations.
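The probing setup described in the abstract can be sketched in a few lines of Python. The following is a minimal illustration, not the authors' implementation: it scores a representation on the sentence-length task by training a simple classifier (here, logistic regression; the paper does not prescribe this choice) on the sentence vectors. The toy averaging encoder avg_embed and the length bins are hypothetical placeholders for whatever representation is being analyzed.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def length_bin(sentence, bins=(5, 8, 12, 16, 20)):
    """Bucket the token count into a small set of classes (assumed binning)."""
    n = len(sentence.split())
    return sum(n > b for b in bins)

def probe_length(sentences, embed):
    """Train a classifier on (sentence vector, length bin) pairs and return
    its held-out accuracy; higher accuracy suggests length is better encoded."""
    X = np.stack([embed(s) for s in sentences])
    y = np.array([length_bin(s) for s in sentences])
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return clf.score(X_te, y_te)

# Hypothetical encoder for illustration: the average of random word vectors,
# a stand-in for CBOW-style averaging. An LSTM encoder would plug in the
# same way, since probe_length only needs a sentence-to-vector function.
rng = np.random.default_rng(0)
word_vecs = {}

def avg_embed(sentence, dim=50):
    vecs = [word_vecs.setdefault(w, rng.standard_normal(dim))
            for w in sentence.split()]
    return np.mean(vecs, axis=0)

The word-content and word-order tasks follow the same pattern, differing only in the labels derived from each sentence.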
TL;DR: A method for analyzing sentence embeddings on a fine-grained level using auxiliary prediction tasks
Keywords: Natural language processing, Deep learning
Conflicts: biu.ac.il, mit.edu, ibm.com