Improve the translational distance models for knowledge graph embedding

Published: 01 Jan 2020, Last Modified: 18 May 2025 · J. Intell. Inf. Syst. 2020 · CC BY-SA 4.0
Abstract: Knowledge graph embedding techniques can be roughly divided into two mainstream categories: translational distance models and semantic matching models. Though intuitive, translational distance models fail to deal with circle structures and hierarchical structures in knowledge graphs. In this paper, we propose a general learning framework named TransX-pa, which takes various models (TransE, TransR, TransH and TransD) into consideration. From this unified viewpoint, we identify the learning bottlenecks as: (i) the common assumption that the inverse of a relation r is modelled as its opposite −r; and (ii) the failure to capture the rich interactions between entities and relations. Correspondingly, we introduce position-aware embeddings and self-attention blocks, and show that they can be adapted to various translational distance models. Experiments are conducted on datasets extracted from the real-world knowledge graphs Freebase and WordNet, on the tasks of both triplet classification and link prediction. The results show that our approach yields substantial improvements, with performance better than or comparable to state-of-the-art methods.
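To make the two ingredients concrete, the sketch below shows a TransE-style translational distance scorer augmented with position-aware embeddings (separate vectors for the head and tail slots, so a relation's inverse need not be its opposite −r) and a self-attention block over the (head, relation, tail) sequence to model entity-relation interactions. This is a minimal illustration assuming a PyTorch implementation; the class name, dimensions, and layer choices are illustrative and not the paper's exact TransX-pa architecture.

```python
import torch
import torch.nn as nn

class TransEPA(nn.Module):
    """Hypothetical sketch: TransE scoring with position-aware embeddings
    and a self-attention block. Names and hyperparameters are illustrative,
    not the paper's exact TransX-pa specification."""

    def __init__(self, n_entities, n_relations, dim=100, n_heads=4):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)
        self.rel = nn.Embedding(n_relations, dim)
        # Position-aware embeddings: index 0 marks the head slot, index 1
        # the tail slot, so the model need not encode inverse(r) as -r.
        self.pos = nn.Embedding(2, dim)
        # Self-attention over the (h, r, t) token sequence captures
        # interactions between entities and relations.
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)

    def score(self, h_idx, r_idx, t_idx):
        h = self.ent(h_idx) + self.pos.weight[0]  # head entity + head position
        r = self.rel(r_idx)
        t = self.ent(t_idx) + self.pos.weight[1]  # tail entity + tail position
        seq = torch.stack([h, r, t], dim=1)       # shape: (batch, 3, dim)
        seq, _ = self.attn(seq, seq, seq)         # entity-relation interactions
        h, r, t = seq[:, 0], seq[:, 1], seq[:, 2]
        # TransE-style translational distance: lower score = more plausible.
        return torch.norm(h + r - t, p=1, dim=-1)

# Usage example with toy indices.
model = TransEPA(n_entities=1000, n_relations=50)
scores = model.score(torch.tensor([1, 2]), torch.tensor([3, 4]), torch.tensor([5, 6]))
print(scores)
```

Because the position embedding is added before attention, swapping an entity between the head and tail slots changes its representation, which is one plausible way to break the symmetry that forces inverse relations to be modelled as −r.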