Deep contextualized word representations

02 Feb 2018 (modified: 21 Apr 2024) · ICLR 2018 Conference Blind Submission · Readers: Everyone
Abstract: We introduce a new type of deep contextualized word representation that models both (1) complex characteristics of word use (e.g., syntax and semantics), and (2) how these uses vary across linguistic contexts (i.e., to model polysemy). Our word vectors are learned functions of the internal states of a deep bidirectional language model (biLM), which is pretrained on a large text corpus. We show that these representations can be easily added to existing models and significantly improve the state of the art across six challenging NLP problems, including question answering, textual entailment and sentiment analysis. We also present an analysis showing that exposing the deep internals of the pretrained network is crucial, allowing downstream models to mix different types of semi-supervision signals.
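The abstract describes the word vectors as learned functions of the internal states of the pretrained biLM, with downstream models given access to all layers rather than only the top one. As a rough illustration (not the authors' released code), the sketch below shows one common way such a combination is implemented: a softmax-normalized, task-learned weighted sum over the biLM's layer activations plus a global scale. The layer count, tensor shapes, and the `ScalarMix` class name are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ScalarMix(nn.Module):
    """Collapse biLM layer states into one vector per token via a
    softmax-normalized, task-learned weighted sum (illustrative sketch)."""

    def __init__(self, num_layers: int = 3):
        super().__init__()
        # One scalar weight per biLM layer plus a global scale, both
        # learned jointly with the downstream task model.
        self.layer_weights = nn.Parameter(torch.zeros(num_layers))
        self.gamma = nn.Parameter(torch.ones(1))

    def forward(self, layer_states: torch.Tensor) -> torch.Tensor:
        # layer_states: (num_layers, batch, seq_len, hidden)
        weights = torch.softmax(self.layer_weights, dim=0)
        mixed = (weights.view(-1, 1, 1, 1) * layer_states).sum(dim=0)
        return self.gamma * mixed

# Toy usage with random stand-in biLM activations:
# 3 layers, batch of 2, 5 tokens, hidden size 1024 (assumed dimensions).
states = torch.randn(3, 2, 5, 1024)
contextual_vectors = ScalarMix(num_layers=3)(states)  # -> (2, 5, 1024)
```

Letting the task model learn these per-layer weights is what "exposing the deep internals" refers to: lower layers tend to carry more syntactic signal and higher layers more semantic signal, so different tasks can weight them differently.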
TL;DR: We introduce a new type of deep contextualized word representation that significantly improves the state of the art for a range of challenging NLP tasks.
Keywords: representation learning, contextualized word embeddings
Community Implementations: [43 code implementations](https://www.catalyzex.com/paper/arxiv:1802.05365/code)
Withdrawal: Confirmed
