Beyond Bilingual: Multi-sense Word Embeddings using Multilingual Context

Submitted to ICLR 2017. Readers: Everyone
Abstract: Word embeddings, which represent a word as a point in a vector space, have become ubiquitous in NLP. A recent line of work uses bilingual (two-language) corpora to learn a separate vector for each sense of a word, exploiting crosslingual signals to aid sense identification. We present a multi-view Bayesian non-parametric algorithm that improves multi-sense word embeddings by (a) using multilingual (i.e., more than two languages) corpora to improve sense embeddings significantly beyond what bilingual information achieves, and (b) learning a variable number of senses per word in a principled, data-driven manner. Ours is the first approach able to leverage multilingual corpora efficiently for multi-sense representation learning. Experiments show that multilingual training significantly outperforms monolingual and bilingual training by allowing us to combine different parallel corpora and thereby leverage multilingual context. Multilingual training yields performance comparable to a state-of-the-art monolingual model trained on five times as much data.
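The data-driven allocation of a variable number of senses per word can be pictured with a Chinese-Restaurant-Process-style prior over senses, scored against the pooled multilingual context. The sketch below is a minimal, hypothetical illustration of that general idea, not the paper's actual variational algorithm; the function name, the cosine-based context-fit term, the online mean update, and the parameters (`alpha`, the 0.1 learning rate) are all assumptions made for exposition.

```python
import numpy as np

def assign_sense(context_vecs, sense_means, sense_counts, alpha=1.0):
    """Pick a sense for one token occurrence, CRP-style (illustrative sketch).

    context_vecs : list of np.ndarray -- context vectors from each
        language's parallel sentence (the multilingual "views").
    sense_means  : list of np.ndarray -- current mean embedding of each
        existing sense of the word (mutated in place).
    sense_counts : list of int        -- how often each sense was used.
    alpha        : concentration parameter controlling how readily a
        new sense is created.
    Returns the index of the chosen sense (possibly a brand-new one).
    """
    # Average the views: the crosslingual contexts vote jointly on the sense.
    ctx = np.mean(context_vecs, axis=0)
    ctx = ctx / (np.linalg.norm(ctx) + 1e-8)

    n = sum(sense_counts)
    scores = []
    for mean, count in zip(sense_means, sense_counts):
        mu = mean / (np.linalg.norm(mean) + 1e-8)
        # CRP prior (count / (n + alpha)) times a context-fit term.
        scores.append((count / (n + alpha)) * np.exp(ctx @ mu))
    # Probability of opening a new sense "table".
    scores.append(alpha / (n + alpha))

    probs = np.array(scores) / np.sum(scores)
    k = np.random.choice(len(probs), p=probs)
    if k == len(sense_means):          # new sense: seed it at the context
        sense_means.append(ctx.copy())
        sense_counts.append(0)
    sense_counts[k] += 1
    # Online update nudges the chosen sense's mean toward the context.
    sense_means[k] += 0.1 * (ctx - sense_means[k])
    return k
```

In the full model, the context-fit term would come from a posterior over each language's view rather than this cosine heuristic, but the sketch shows why more languages help: every added view sharpens the pooled context that decides which sense (if any) a new occurrence belongs to.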
TL;DR: Using multilingual context for learning multi-sense embeddings helps.
Keywords: Natural language processing
Conflicts: microsoft.com, virginia.edu, illinois.edu, stanford.edu
Community Implementations: [1 code implementation (CatalyzeX)](https://www.catalyzex.com/paper/arxiv:1706.08160/code)
11 Replies
