Consistent Alignment of Word Embedding Models
Cem Safak Sahin, Rajmonda S. Caceres, Brandon Oselio, William M. Campbell
Feb 17, 2017 (modified: Feb 17, 2017) · ICLR 2017 workshop submission · readers: everyone
Abstract: Word embedding models offer continuous vector representations that can capture rich contextual semantics based on word co-occurrence patterns. While these word vectors provide effective features for many NLP tasks, such as clustering similar words and inferring linguistic relationships, many challenges and open research questions remain. In this paper, we propose a solution that aligns variations of the same model (or different models) in a joint low-dimensional latent space by leveraging carefully generated synthetic data points. This generative process is inspired by the observation that a variety of linguistic relationships are captured by simple linear operations in embedded space. We demonstrate that our approach can lead to substantial improvements in recovering embeddings of local neighborhoods.
TL;DR: Improving consistency and alignment of word embedding models via injection of synthetic data points
Keywords: Natural language processing, Transfer Learning, Unsupervised Learning, Applications
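The abstract does not spell out the alignment procedure, so as a point of reference here is a minimal sketch of a standard baseline for aligning two embedding models over a shared vocabulary: the orthogonal Procrustes transform, which finds the rotation W minimizing ||AW - B||_F. The matrices A, B and the synthetic setup below are illustrative assumptions, not the paper's method (which additionally injects generated synthetic data points).

```python
import numpy as np

# Illustrative only: align two embedding matrices whose rows correspond
# to the same words, where A is a rotated, noisy copy of B.
rng = np.random.default_rng(0)
d, n = 10, 50
B = rng.standard_normal((n, d))                    # "target" model embeddings
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))   # unknown random rotation
A = B @ Q.T + 0.01 * rng.standard_normal((n, d))   # "source" model embeddings

# Orthogonal Procrustes: the rotation minimizing ||A W - B||_F in closed
# form is W = U V^T, where U S V^T is the SVD of A^T B.
U, _, Vt = np.linalg.svd(A.T @ B)
W = U @ Vt

err_before = np.linalg.norm(A - B)
err_after = np.linalg.norm(A @ W - B)
print(err_before, err_after)  # alignment error drops sharply
```

Because W is constrained to be orthogonal, the transform preserves distances and angles within the source space, so local neighborhood structure survives the alignment.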