Abstract: Pre-trained language models contain a vast amount of linguistic information as well as knowledge about the structure of the world. Both are highly valuable for the automatic enrichment of semantic graphs, such as knowledge bases and lexical-semantic databases. In this article, we employ generative language models to predict descendants of existing nodes in lexical data structures organized by IS-A relations, such as WordNet. To this end, we conduct experiments with diverse formats of artificial text input that encode information from the lexical taxonomy, for both English and Russian. Our findings demonstrate that incorporating data from the knowledge graph into the text input significantly affects the quality of hyponym prediction.