A STRUCTURED SELF-ATTENTIVE SENTENCE EMBEDDING

Published: 21 Jul 2022, Last Modified: 22 Oct 2023 · ICLR 2017 Poster
Abstract: This paper proposes a new model for extracting an interpretable sentence embedding by introducing self-attention. Instead of using a vector, we use a 2-D matrix to represent the embedding, with each row of the matrix attending to a different part of the sentence. We also propose a self-attention mechanism and a special regularization term for the model. As a side effect, the embedding comes with an easy way of visualizing which specific parts of the sentence are encoded into the embedding. We evaluate our model on 3 different tasks: author profiling, sentiment classification, and textual entailment. Results show that our model yields a significant performance gain compared to other sentence embedding methods on all 3 tasks.
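The matrix embedding and penalty described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the random `H` stands in for BiLSTM hidden states, and all dimension sizes (`n`, `two_u`, `d_a`, `r`) are placeholder values chosen for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the token axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)

# Placeholder sizes: n tokens, 2u-dim hidden states,
# d_a attention units, r attention hops (matrix rows).
n, two_u, d_a, r = 10, 8, 6, 4

H = rng.standard_normal((n, two_u))       # stand-in for BiLSTM hidden states
W_s1 = rng.standard_normal((d_a, two_u))  # first attention projection
W_s2 = rng.standard_normal((r, d_a))      # one row per attention hop

# A = softmax(W_s2 tanh(W_s1 H^T)): each of the r rows of A is a
# probability distribution over the n tokens.
A = softmax(W_s2 @ np.tanh(W_s1 @ H.T), axis=1)   # shape (r, n)

# The sentence embedding is the r x 2u matrix M = A H, with each
# row summarizing a different part of the sentence.
M = A @ H                                          # shape (r, two_u)

# Regularization term pushing the r hops apart:
# P = ||A A^T - I||_F^2 (zero when the rows of A are orthonormal).
P = np.linalg.norm(A @ A.T - np.eye(r), ord="fro") ** 2
```

In training, `P` would be added (scaled by a coefficient) to the task loss, discouraging the attention rows from collapsing onto the same tokens.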
TL;DR: A new model for extracting an interpretable sentence embedding by introducing self-attention and a matrix representation.
Conflicts: us.ibm.com, iro.umontreal.ca, umontreal.ca
Keywords: Natural language processing, Deep learning, Supervised Learning
Community Implementations: [48 code implementations](https://www.catalyzex.com/paper/arxiv:1703.03130/code)