gMLP-KGE: a simple but efficient MLP-with-gating architecture for link prediction

Published: 01 Jan 2024, Last Modified: 20 Feb 2025. Appl. Intell. 2024. CC BY-SA 4.0
Abstract: Most existing knowledge graphs (KGs) suffer from incompleteness, which is detrimental to a variety of downstream applications. Link prediction is the task of predicting missing links in a KG and can effectively address this incompleteness via knowledge graph embedding (KGE). ConvE, a popular KGE model based on convolutional neural networks, has shown strong performance in link prediction. Subsequent extensions of ConvE achieve state-of-the-art results by increasing model complexity and training time, which leads to a high risk of overfitting and limits performance, since a large share of the parameters is concentrated in the fully connected projection layer. To address these challenges, we are the first to introduce and extend gMLP, a recent simple network architecture based on multi-layer perceptrons (MLPs) with gating originally proposed for vision tasks, to link prediction. We propose a simple and efficient model called gMLP-KGE, which consists of an embedding layer, an input layer, an extended gMLP layer, and an output layer. Extensive experiments show that the parameter count of gMLP-KGE is close to that of ConvE and smaller than that of the other extension models, while gMLP-KGE consistently performs well on seven datasets of different scales under most evaluation metrics.
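As a rough illustration of the pipeline the abstract describes (embed a head entity and a relation, pass them through a gMLP-style block with a spatial gating unit, then score candidate tail entities), here is a minimal numpy sketch. All names, shapes, and initializations are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a gMLP-style block used for KGE scoring.
# Assumed setup: the (head, relation) embeddings form a length-2 "sequence",
# a gMLP block (channel expansion -> spatial gating unit -> channel
# contraction, with a residual connection) transforms it, and the pooled
# output is scored against all entity embeddings.
import numpy as np

rng = np.random.default_rng(0)

def gelu(x):
    # tanh approximation of GELU
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

class SpatialGatingUnit:
    """Split channels in half; gate one half by a linear projection
    along the sequence (spatial) axis of the other half, as in gMLP."""
    def __init__(self, seq_len):
        self.W = rng.normal(0.0, 0.02, (seq_len, seq_len))  # spatial projection
        self.b = np.ones(seq_len)                           # bias initialized near 1

    def __call__(self, x):                  # x: (seq_len, d_ffn)
        u, v = np.split(x, 2, axis=-1)      # channel split
        v = self.W @ v + self.b[:, None]    # mix across the sequence axis
        return u * v                        # elementwise gating

class GMLPBlock:
    def __init__(self, seq_len, d_model, d_ffn):
        self.U = rng.normal(0.0, 0.02, (d_model, d_ffn))       # channel expansion
        self.V = rng.normal(0.0, 0.02, (d_ffn // 2, d_model))  # channel contraction
        self.sgu = SpatialGatingUnit(seq_len)

    def __call__(self, x):                  # x: (seq_len, d_model)
        z = gelu(x @ self.U)
        z = self.sgu(z)
        return x + z @ self.V               # residual connection

# Toy link-prediction scoring: (head, relation) in, one score per entity out.
d_model, d_ffn, n_entities = 16, 32, 50
entity_emb = rng.normal(0.0, 0.1, (n_entities, d_model))
rel_emb = rng.normal(0.0, 0.1, (d_model,))

block = GMLPBlock(seq_len=2, d_model=d_model, d_ffn=d_ffn)
h = np.stack([entity_emb[0], rel_emb])      # (2, d_model) "sequence"
out = block(h).mean(axis=0)                 # pool to a single vector
scores = entity_emb @ out                   # score every candidate tail entity
print(scores.shape)
```

Note that, unlike self-attention, the spatial mixing here is a single learned matrix over a fixed-length sequence, which is what keeps the parameter count small relative to ConvE-style extensions.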