Embedding Model with Attention over Convolution Kernels and Dynamic Mapping Matrix for Link Prediction

Published: 01 Jan 2022 · Last Modified: 08 Jul 2024 · ACIIDS (1) 2022 · CC BY-SA 4.0
Abstract: Knowledge Graph Completion, and especially its sub-task of link prediction, attracts attention from both the research community and industry because it is a premise for several potential applications. Knowledge graph embedding (KGE) shows promising results on this problem. This paper focuses on the neural network-based approach to KGE, which extracts features from graphs better than other families of embedding methods. The ConvE model was the first to apply 2D convolution over embeddings and stack multiple nonlinear feature layers to model knowledge graphs. However, its computation is inefficient, and reshaping the embeddings does not preserve the translational property between entity and relation embeddings. Dynamic convolution was later designed to address the limited representation capability of standard convolution and shows promising performance. This work introduces a mixture model that incorporates attention into the convolutional operation on projected embeddings. Following TransD, an entity embedding is projected from the entity space into the relation space. The projected embedding is then stacked with the relation embedding, and dynamic convolution is applied to the stacked embeddings without reshaping, following Conv-TransE. The translational property between entities and relations is thus preserved, and their diversity is taken into account. Experiments on benchmark datasets show that the proposed model outperforms baseline models in terms of MR, MRR, and Hits@K.
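To make the pipeline concrete, below is a minimal PyTorch sketch of the scoring function the abstract describes: a TransD-style dynamic projection of the head entity into relation space, stacking with the relation embedding, and a dynamic 1D convolution that attends over K candidate kernels, applied along the embedding dimension without reshaping (Conv-TransE style). The class name `DynamicConvTransD`, all layer sizes, the number of kernels, and the attention design are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch, assuming a PyTorch implementation; hyperparameters and the
# kernel-attention design are illustrative, not the authors' exact setup.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DynamicConvTransD(nn.Module):
    def __init__(self, num_entities, num_relations, dim=200, num_kernels=4,
                 out_channels=32, kernel_size=3):
        super().__init__()
        self.dim = dim
        # Entity/relation embeddings plus TransD projection vectors.
        self.ent = nn.Embedding(num_entities, dim)
        self.ent_p = nn.Embedding(num_entities, dim)   # entity projection vector
        self.rel = nn.Embedding(num_relations, dim)
        self.rel_p = nn.Embedding(num_relations, dim)  # relation projection vector
        # K candidate convolution kernels; the input has 2 channels (entity row,
        # relation row) and is convolved along the embedding dimension,
        # Conv-TransE style, with no 2D reshaping.
        self.kernels = nn.Parameter(
            torch.randn(num_kernels, out_channels, 2, kernel_size) * 0.1)
        # Attention over kernels, conditioned on the stacked input (an assumed
        # design; dynamic-convolution papers use squeeze-and-excite pooling).
        self.attn = nn.Linear(2 * dim, num_kernels)
        self.pad = kernel_size // 2
        self.fc = nn.Linear(out_channels * dim, dim)

    def project(self, e, e_p, r_p):
        # TransD dynamic mapping: h_perp = (r_p h_p^T + I) h, computed in the
        # equivalent vector form h + (h_p . h) * r_p, avoiding the full matrix.
        return e + (e_p * e).sum(dim=-1, keepdim=True) * r_p

    def score(self, heads, rels):
        h = self.project(self.ent(heads), self.ent_p(heads), self.rel_p(rels))
        r = self.rel(rels)
        x = torch.stack([h, r], dim=1)                  # (B, 2, dim), no reshape
        # Attention weights over the K kernels, one distribution per example.
        alpha = F.softmax(self.attn(x.flatten(1)), dim=-1)        # (B, K)
        # Aggregate the K kernels per example with the attention weights.
        agg = torch.einsum('bk,kocw->bocw', alpha, self.kernels)  # (B, O, 2, w)
        B = x.size(0)
        # Grouped-conv trick: one group per example applies that example's
        # aggregated kernel in a single conv1d call.
        out = F.conv1d(x.reshape(1, B * 2, self.dim),
                       agg.reshape(B * agg.size(1), 2, -1),
                       padding=self.pad, groups=B)
        out = F.relu(out.reshape(B, -1, self.dim))      # (B, O, dim)
        q = self.fc(out.flatten(1))                     # (B, dim) query vector
        return q @ self.ent.weight.t()                  # scores over all tails


# Usage: score two (head, relation) queries against every candidate tail.
model = DynamicConvTransD(num_entities=100, num_relations=20)
scores = model.score(torch.tensor([0, 5]), torch.tensor([1, 3]))
print(scores.shape)  # torch.Size([2, 100])
```

Note the two properties the abstract emphasizes: the TransD projection gives each (entity, relation) pair its own mapping, so entity and relation diversity is modeled, and because the convolution slides only along the embedding dimension of the stacked rows, the translational alignment between the entity and relation embeddings is preserved.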