A New Method of Region Embedding for Text Classification

15 Feb 2018 (modified: 27 Jun 2023) · ICLR 2018 Conference Blind Submission
Abstract: Representing a text as a bag of properly identified "phrases" and using that representation to process the text has proved useful. The key question is how to identify the phrases and how to represent them. The traditional method of utilizing n-grams can be regarded as an approximation of this approach; however, it can suffer from data sparsity, particularly when the n-gram length is large. In this paper, we propose a new method of learning and utilizing task-specific distributed representations of n-grams, referred to as "region embeddings". Without loss of generality, we address text classification. We specifically propose two models for region embeddings. In our models, the representation of a word has two parts: the embedding of the word itself, and a weighting matrix that interacts with the local context, referred to as the local context unit. The region embeddings are learned and used in the classification task as parameters of the neural network classifier. Experimental results show that our proposed method outperforms existing methods in text classification on several benchmark datasets. The results also indicate that our method can indeed capture salient phrasal expressions in texts.
Keywords: region embedding, local context unit, text classification
Code: [![github](/images/github_icon.svg) text-representation/local-context-unit](https://github.com/text-representation/local-context-unit)
Data: [Yelp Review Polarity](https://paperswithcode.com/dataset/yelp-review-polarity)
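The abstract describes each word as carrying both an embedding and a local context unit that weights its neighbours. A minimal NumPy sketch of one plausible reading (the middle word's context unit element-wise weights each neighbour's embedding, followed by max-pooling over the region) is below; all names, shapes, and the random parameters are illustrative assumptions, not the authors' implementation — see the linked repository for the real code.

```python
import numpy as np

rng = np.random.default_rng(0)

vocab_size, emb_dim, region_radius = 1000, 128, 3
region_size = 2 * region_radius + 1  # number of words in a region

# Hypothetical learned parameters: one embedding and one local context
# unit (a region_size x emb_dim weighting matrix) per vocabulary word.
E = rng.standard_normal((vocab_size, emb_dim))               # word embeddings
K = rng.standard_normal((vocab_size, region_size, emb_dim))  # local context units

def region_embedding(token_ids, i):
    """Sketch of a region embedding centered at position i: the middle
    word's context unit weights each in-bounds neighbour's embedding
    element-wise, then max-pooling aggregates over the region."""
    middle = token_ids[i]
    projected = []
    for t in range(-region_radius, region_radius + 1):
        j = i + t
        if 0 <= j < len(token_ids):
            projected.append(K[middle, t + region_radius] * E[token_ids[j]])
    return np.max(np.stack(projected), axis=0)  # max-pool over the region

tokens = rng.integers(0, vocab_size, size=20)
r = region_embedding(tokens, 10)
print(r.shape)  # one fixed-size vector per region: (emb_dim,)
```

In the paper's setup such region embeddings would then be summed or pooled over the document and fed to the classifier, with E and K trained end-to-end on the classification loss.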