Abstract: Semantic similarity between words has become a popular research problem in the field of natural language processing (NLP). Word embeddings have recently demonstrated progress in measuring word similarity. However, because they rely on the distributional hypothesis, basic embedding methods have inherent drawbacks. One limitation is that word embeddings are usually trained by predicting a target word from its local context, so only limited information is captured. In this paper, we propose a novel transferred-vectors approach to computing word semantic similarity. A transferred vector is obtained by a principled combination of the source word and its semantically nearest neighbors. We conduct experiments on popular English and Chinese benchmarks for measuring word similarity. The experimental results demonstrate that our method outperforms the previous state of the art by a large margin.