Distributional Models and Deep Learning Embeddings: Combining the Best of Both Worlds

Irina Sergienya, Hinrich Schütze

Dec 20, 2013 · ICLR 2014 workshop submission
  • Decision: submitted, no decision
  • Abstract: There are two main approaches to the distributed representation of words: low-dimensional deep learning embeddings and high-dimensional distributional models, in which each dimension corresponds to a context word. In this paper, we combine these two approaches by learning embeddings based on distributional-model vectors rather than the one-hot vectors standardly used in deep learning. We show that the combined approach performs better on a word relatedness judgment task.
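
The abstract does not specify the model architecture, but the core idea (feeding a word's high-dimensional distributional vector into the embedding layer instead of a one-hot vector) can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the toy corpus, the window size, the embedding dimensionality `d`, and the random projection `W` standing in for a trained parameter matrix; none of these come from the paper.

```python
import numpy as np

# Toy corpus and vocabulary (purely illustrative; not the paper's data).
corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
V = len(vocab)

# High-dimensional distributional model: symmetric co-occurrence counts
# within a +/-1 word window. Row i is word i's context-count vector.
C = np.zeros((V, V))
for i, w in enumerate(corpus):
    for j in (i - 1, i + 1):
        if 0 <= j < len(corpus):
            C[idx[w], idx[corpus[j]]] += 1

d = 4  # low embedding dimensionality (hypothetical choice)
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(d, V))  # projection matrix; in a real
                                        # model this would be learned

def embed_onehot(word):
    # Standard deep-learning input: a one-hot vector, so the embedding
    # is simply column idx[word] of W.
    return W[:, idx[word]]

def embed_distributional(word):
    # The combined approach sketched in the abstract: feed the word's
    # distributional vector through the same projection, so the output
    # mixes the columns of W, weighted by normalized context counts.
    c = C[idx[word]]
    return W @ (c / c.sum())

print(embed_onehot("cat"))
print(embed_distributional("cat"))
```

In a full model, `W` would be trained with whatever objective the embedding method uses; the sketch only shows how the input representation changes, with the distributional variant sharing statistical strength across words that occur in similar contexts.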
