A Contextualized BERT model for Knowledge Graph Completion

NeurIPS 2024 Workshop MusIML, Submission 6

Published: 30 Nov 2024, Last Modified: 07 Dec 2024. MusIML Poster. License: CC BY 4.0
Keywords: Knowledge Graphs, Knowledge Graph Completion, Tail entity prediction, Link prediction
TL;DR: Contextualized BERT for Knowledge Graph Completion leverages neighboring entity and relationship context to predict missing links, outperforming traditional and LLM-based models while reducing computational demands.
Abstract: Knowledge graphs (KGs) are valuable for representing structured, interconnected information across domains, enabling tasks such as semantic search, recommendation systems, and inference. A pertinent challenge with KGs, however, is that many entities (i.e., heads, tails) or relationships are unknown. Knowledge Graph Completion (KGC) addresses this by predicting the missing nodes or links, enhancing the graph's informational depth and utility. Traditional embedding methods like TransE and ComplEx predict tail entities but struggle with unseen entities. Textual-based models leverage additional semantics but come with high computational costs, semantic inconsistencies, and data imbalance issues. Recent LLM-based models show improvement but overlook contextual information and rely heavily on entity descriptions. In this study, we introduce a contextualized BERT model for KGC that overcomes these limitations by utilizing the contextual information from neighboring entities and relationships to predict tail entities. Our model eliminates the need for entity descriptions and negative triplet sampling, reducing computational demands while improving performance. Our model outperforms state-of-the-art methods on standard datasets, improving Hit@1 by 5.3% and 4.88% on FB15k-237 and WN18RR respectively, setting a new benchmark in KGC.
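The abstract describes predicting a missing tail entity from a head entity, a relation, and the contextual information of neighboring triples, without entity descriptions or negative sampling. As an illustration only (the paper's exact input format is not given here), a minimal sketch of how such a query might be serialized into a single sequence for a BERT-style encoder, with the template and function name being hypothetical assumptions:

```python
def serialize_query(head, relation, neighbors, max_neighbors=3):
    """Build a contextualized input string for tail-entity prediction.

    `neighbors` is a list of (relation, entity) pairs drawn from the head
    entity's local neighborhood in the KG. The template below is a
    hypothetical illustration, not the paper's published format.
    """
    # Concatenate a bounded number of neighboring facts as context.
    context = " [SEP] ".join(f"{r} {e}" for r, e in neighbors[:max_neighbors])
    # The [MASK] slot marks the unknown tail entity the model must predict.
    return f"[CLS] {head} {relation} [MASK] [SEP] {context} [SEP]"

query = serialize_query(
    head="Barack Obama",
    relation="born in",
    neighbors=[("profession", "politician"), ("spouse", "Michelle Obama")],
)
```

A BERT-style model would then score candidate entities for the `[MASK]` position; because the context comes from the graph itself, no textual entity descriptions are required.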
Submission Number: 6