Do Embeddings Actually Capture Knowledge Graph Semantics?

Published: 23 Feb 2021, Last Modified: 05 May 2023
ESWC 2021 Research
Keywords: knowledge graph embeddings, semantic representation, entity similarity
Abstract: Knowledge graph embeddings, which generate vector space representations of knowledge graph triples, have gained considerable popularity in recent years. Several embedding models have been proposed that achieve state-of-the-art performance on the task of triple completion in knowledge graphs. Relying on the presumed semantic capabilities of the learned embeddings, they have been leveraged for various other tasks such as entity typing, rule mining, and conceptual clustering. However, previous work has not critically analyzed the utility and the limitations of these embeddings for semantically representing the underlying entities and relations. In this paper, we perform a systematic evaluation of popular knowledge graph embedding models to better understand their semantic capabilities in comparison with a non-embedding-based approach. Our analysis shows that the semantic representation in knowledge graph embeddings is not universal, but is restricted to a subset of the entities depending on dataset characteristics, and we provide further insights into the reasons for this behavior. The results of our experiments indicate that the benefits of embeddings need to be carefully analyzed before employing them for semantic tasks.
Subtrack: Knowledge Graphs (understanding, creating, and exploiting)
Negative Results Paper: This is a negative results paper.
First Author Is Student: Yes