Distributional Inclusion Hypothesis and Quantifications: Probing Hypernymy in Functional Distributional Semantics

Published: 01 Jan 2023, Last Modified: 14 Dec 2023. CoRR 2023.
Abstract: Functional Distributional Semantics (FDS) models the meaning of words with truth-conditional functions. This provides a natural representation for hypernymy, but no guarantee that hypernymy is learnt when FDS models are trained on a corpus. We demonstrate that FDS models learn hypernymy when a corpus strictly follows the Distributional Inclusion Hypothesis (DIH). We further introduce a training objective that allows FDS to handle simple universal quantifications, thus enabling hypernymy learning under the reverse of the DIH. Experimental results on both synthetic and real datasets confirm our hypotheses and the effectiveness of the proposed objective.