Investigating Different Context Types and Representations for Learning Word Embeddings
Bofang Li, Tao Liu, Zhe Zhao, Buzhou Tang, Xiaoyong Du
Nov 04, 2016 (modified: Jan 25, 2017) · ICLR 2017 conference submission · readers: everyone
Abstract: The number of word embedding models grows every year. Most of them learn word embeddings from the co-occurrence information of words and their contexts. However, it is still an open question what the best definition of context is. We provide the first systematic investigation of different context types and representations for learning word embeddings. We conduct comprehensive experiments to evaluate their effectiveness on four tasks (21 datasets), which yields insights into context selection. We hope that this paper, along with the published code, can serve as a guideline for choosing contexts in our community.
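As a minimal sketch of the co-occurrence-based contexts the abstract refers to (this is illustrative code, not the authors' published implementation): a linear window context can be represented either "unbound" (the plain neighboring word) or "bound" (the word tagged with its relative position), two of the representation choices such investigations compare.

```python
def linear_contexts(tokens, index, window=2, bound=False):
    """Return the linear-window context words of tokens[index].

    If bound=True, each context word is tagged with its relative
    offset, giving a position-"bound" context representation.
    """
    contexts = []
    for offset in range(-window, window + 1):
        j = index + offset
        if offset == 0 or j < 0 or j >= len(tokens):
            continue  # skip the target word itself and out-of-range positions
        word = tokens[j]
        contexts.append(f"{word}_{offset}" if bound else word)
    return contexts

sentence = "the quick brown fox jumps".split()
print(linear_contexts(sentence, 2))              # → ['the', 'quick', 'fox', 'jumps']
print(linear_contexts(sentence, 2, bound=True))  # → ['the_-2', 'quick_-1', 'fox_1', 'jumps_2']
```

Bound representations distinguish where a context word occurred relative to the target, at the cost of a larger context vocabulary; unbound representations pool all positions together.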
TL;DR: This paper investigates different context types and representations for learning word embeddings.
Keywords: Unsupervised Learning, Natural Language Processing