Compact Encoding of Words for Efficient Character-level Convolutional Neural Networks Text Classification

15 Feb 2018 (modified: 23 Jan 2023) · ICLR 2018 Conference Blind Submission · Readers: Everyone
Abstract: This paper puts forward a new text-to-tensor representation that relies on information compression techniques to assign shorter codes to the most frequently used characters. This representation is language-independent, requires no pretraining, and produces an encoding with no information loss. It provides an adequate description of the morphology of text, as it represents prefixes, declensions, and inflections with similar vectors and can represent even words unseen in the training dataset. Because it is compact yet sparse, it is also well suited to speeding up training with tensor processing libraries. We show that this technique is especially effective when coupled with convolutional neural networks (CNNs) for character-level text classification, and we apply two CNN variants on top of it. Experimental results show that it drastically reduces the number of parameters to be optimized, yielding competitive classification accuracy in only a fraction of the time required by one-hot encoding representations, thus enabling training on commodity hardware.
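
The abstract does not spell out the exact compression scheme, so the following is only a minimal sketch of the general idea: assign shorter binary codes to more frequent characters (Huffman coding is used here as a stand-in) and pack each word into a fixed-width 0/1 vector. The helper names `huffman_codes` and `encode_word`, the fixed `width`, and the toy corpus are all assumptions for illustration, not the paper's actual encoding.

```python
# Hypothetical sketch: frequency-based variable-length codes for characters,
# words packed into fixed-width binary vectors (Huffman coding as a stand-in
# for the paper's unspecified compression scheme).
import heapq
from collections import Counter
from itertools import count

import numpy as np

def huffman_codes(text):
    """Build a prefix-free binary code from character frequencies."""
    freqs = Counter(text)
    tie = count()  # tie-breaker so the heap never compares dicts
    heap = [(f, next(tie), {ch: ""}) for ch, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {ch: "0" + code for ch, code in c1.items()}
        merged.update({ch: "1" + code for ch, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, next(tie), merged))
    return heap[0][2]

def encode_word(word, codes, width=32):
    """Concatenate character codes, then pad/truncate to a fixed-width vector."""
    bits = "".join(codes.get(ch, "") for ch in word)[:width]
    vec = np.zeros(width, dtype=np.float32)
    vec[: len(bits)] = [int(b) for b in bits]
    return vec

corpus = "the quick brown fox jumps over the lazy dog the end"
codes = huffman_codes(corpus)
print(encode_word("the", codes))  # frequent characters -> short codes -> compact vector
```

Under this kind of scheme, words sharing prefixes or inflectional endings share long runs of identical bits, which is consistent with the abstract's claim that morphologically related (and even unseen) words receive similar vectors.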
TL;DR: Using compression techniques to encode words enables faster CNN training and reduces the dimensionality of the representation.
Keywords: Character-Level Convolutional Networks, Text Classification, Word Compression
Data: [Yahoo! Answers](https://paperswithcode.com/dataset/yahoo-answers)