SufiSent - Universal Sentence Representations Using Suffix Encodings

Siddhartha Brahma

Feb 11, 2018 (modified: Jun 04, 2018) · ICLR 2018 Workshop Submission
  • Abstract: Computing universal distributed representations of sentences is a fundamental task in natural language processing. We propose a method to learn such representations by encoding the suffixes of word sequences in a sentence and training on the Stanford Natural Language Inference (SNLI) dataset. We demonstrate the effectiveness of our approach by evaluating it on the SentEval benchmark, improving on existing approaches on several transfer tasks.
  • Keywords: universal sentence representation, LSTM, natural language inference
  • TL;DR: Using LSTM encodings of both prefixes and suffixes gives better universal sentence representations (see the sketch below).
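
The abstract and TL;DR describe the idea only at a high level, so the following is a minimal sketch of one way to realize prefix and suffix encodings with LSTMs, not the authors' exact architecture. The class name `PrefixSuffixEncoder`, the hidden sizes, and the max-pooling step are assumptions made for illustration.

```python
# Minimal sketch (assumed architecture, not the paper's exact model):
# a forward LSTM encodes prefixes w_1..w_i, a backward LSTM encodes
# suffixes w_i..w_n, and the per-position encodings are max-pooled
# into a fixed-size sentence vector.
import torch
import torch.nn as nn


class PrefixSuffixEncoder(nn.Module):
    def __init__(self, vocab_size, embed_dim=300, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Forward LSTM: hidden state at position i encodes the prefix w_1..w_i.
        self.fwd_lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        # Backward LSTM: run over the reversed sentence, so its hidden state
        # at position i encodes the suffix w_i..w_n.
        self.bwd_lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer word indices
        emb = self.embed(token_ids)
        prefix_states, _ = self.fwd_lstm(emb)                    # (B, T, H)
        suffix_states, _ = self.bwd_lstm(torch.flip(emb, [1]))   # (B, T, H)
        suffix_states = torch.flip(suffix_states, [1])           # realign to positions
        # Concatenate prefix and suffix encodings per position, then max-pool
        # over time to get a fixed-size sentence representation.
        combined = torch.cat([prefix_states, suffix_states], dim=-1)  # (B, T, 2H)
        sentence_vec, _ = combined.max(dim=1)                          # (B, 2H)
        return sentence_vec


# Usage example
encoder = PrefixSuffixEncoder(vocab_size=10000)
tokens = torch.randint(0, 10000, (2, 7))   # batch of 2 sentences, 7 tokens each
print(encoder(tokens).shape)               # torch.Size([2, 1024])
```

In a setup of this kind, the resulting sentence vectors would be trained on SNLI (e.g., with an entailment classifier on top) and then evaluated as fixed representations on the SentEval transfer tasks, as the abstract describes.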