Word2Box: Capturing Set-Theoretic Semantics of Words using Box Embeddings


16 Nov 2021, 18:25 (modified: 14 Jan 2022, 11:32) · ACL ARR 2021 November Blind Submission
Abstract: Learning representations of words in a continuous space is perhaps the most fundamental task in NLP, a prerequisite for nearly all modern machine-learning techniques. Often the objective is to capture distributional similarity via vector dot product; however, this is just one relation between word meanings that we may wish to capture. If words are viewed as (soft) equivalence classes based on similarity, it is natural to expect the ability to perform set-theoretic operations (intersection, union, difference) on their representations. This is particularly relevant for words that are homographs: for example, “tongue” ∩ “body” should be similar to “mouth”, while “tongue” ∩ “language” should be similar to “dialect”. Box embeddings are a novel region-based representation that provides the capability to perform these set-theoretic operations. In this work, we provide a fuzzy-set interpretation of box embeddings and train box embeddings with a CBOW objective in which contexts are represented using intersection. We demonstrate improved performance on various word similarity tasks, particularly on less common words, and perform a quantitative and qualitative analysis exploring the additional unique expressivity provided by Word2Box.
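To make the intersection operation concrete, the sketch below shows how two axis-aligned boxes intersect and how an intersection volume can be computed. This is a hypothetical illustration of hard boxes only; the function names are invented for this sketch, and the actual Word2Box model described in the abstract uses a smoothed, fuzzy-set formulation rather than this hard variant.

```python
# Hypothetical sketch of hard box intersection (the paper's model uses a
# smoothed, fuzzy-set variant; names here are illustrative, not from the paper).
import numpy as np

def box_intersection(a_min, a_max, b_min, b_max):
    """Intersect two axis-aligned boxes given by min/max corner vectors."""
    lo = np.maximum(a_min, b_min)  # lower corner of the overlap region
    hi = np.minimum(a_max, b_max)  # upper corner of the overlap region
    return lo, hi

def box_volume(lo, hi):
    """Product of non-negative side lengths; 0 if the box is empty."""
    return float(np.prod(np.clip(hi - lo, 0.0, None)))

# Two overlapping 2-D boxes: [0,2]x[0,2] and [1,3]x[1,3]
lo, hi = box_intersection(np.array([0.0, 0.0]), np.array([2.0, 2.0]),
                          np.array([1.0, 1.0]), np.array([3.0, 3.0]))
print(box_volume(lo, hi))  # 1.0 — the unit square [1,2] x [1,2]
```

Under such a representation, the overlap volume of the boxes for “tongue” and “body” would serve as an (unnormalized) score for their shared meaning, which is the set-theoretic capability the abstract highlights.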