Bridge Distributed Knowledge and Pre-trained Language Models for Knowledge Graph Completion

Anonymous

16 Dec 2023 · ACL ARR 2023 December Blind Submission · Readers: Everyone
TL;DR: Leverage the distributed knowledge principle by using PLMs for knowledge graph completion
Abstract: Knowledge graph completion (KGC) is the task of inferring missing triples from existing knowledge graphs (KGs). Both distributed and semantic information are vital for successful KGC. However, existing methods use only one of the two: either the distributed knowledge from KG embeddings or the semantic information from pre-trained language models (PLMs), leading to suboptimal performance. Moreover, since PLMs are not trained on KGs, directly using PLMs to encode triples is inappropriate. To overcome these limitations, we propose a novel model called \our, which jointly encodes the distributed and semantic information of KGs. Specifically, we strategically encode entities and relations separately with PLMs to better exploit the semantic knowledge in PLMs while enabling distributed representation learning via a distributed learning principle. Furthermore, to bridge the gap between KGs and PLMs, we employ BYOL, a self-supervised representation learning method, to fine-tune the PLMs with two different views of a triple. Experiments demonstrate that \our outperforms state-of-the-art (SOTA) models on three benchmark datasets.
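To make the BYOL-style objective mentioned in the abstract concrete, the sketch below shows one plausible way to fine-tune a PLM with two textual views of the same triple. This is not the authors' implementation: the class name TripleBYOL, the choice of bert-base-uncased, the view construction (a head-plus-relation description vs. a tail-entity description), and all hyperparameters are illustrative assumptions; only the general BYOL recipe (online branch with predictor, EMA target branch with stop-gradient, symmetrized negative-cosine loss) follows the published method.

import copy
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer


class MLPHead(nn.Module):
    """Projection / prediction head used on both branches."""
    def __init__(self, dim_in, dim_hidden=1024, dim_out=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_in, dim_hidden), nn.BatchNorm1d(dim_hidden),
            nn.ReLU(inplace=True), nn.Linear(dim_hidden, dim_out))

    def forward(self, x):
        return self.net(x)


class TripleBYOL(nn.Module):
    """Hypothetical BYOL-style fine-tuning of a PLM on two views of a KG triple."""
    def __init__(self, plm_name="bert-base-uncased", ema_decay=0.99):
        super().__init__()
        self.tokenizer = AutoTokenizer.from_pretrained(plm_name)
        self.online_plm = AutoModel.from_pretrained(plm_name)
        dim = self.online_plm.config.hidden_size
        self.online_proj = MLPHead(dim)
        self.predictor = MLPHead(256, dim_out=256)
        # Target branch: an EMA copy of the online branch, never updated by gradients.
        self.target_plm = copy.deepcopy(self.online_plm)
        self.target_proj = copy.deepcopy(self.online_proj)
        for p in list(self.target_plm.parameters()) + list(self.target_proj.parameters()):
            p.requires_grad = False
        self.ema_decay = ema_decay

    def encode(self, texts, plm, proj):
        toks = self.tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
        cls = plm(**toks).last_hidden_state[:, 0]  # [CLS] embedding
        return proj(cls)

    def forward(self, view_a, view_b):
        # view_a / view_b: two textual views of the same triple, e.g. a
        # (head, relation) description vs. the tail-entity description.
        p_a = self.predictor(self.encode(view_a, self.online_plm, self.online_proj))
        p_b = self.predictor(self.encode(view_b, self.online_plm, self.online_proj))
        with torch.no_grad():  # stop-gradient on the target branch
            z_a = self.encode(view_a, self.target_plm, self.target_proj)
            z_b = self.encode(view_b, self.target_plm, self.target_proj)
        # Symmetrized BYOL regression loss: 2 - 2 * cosine similarity per direction.
        loss_ab = 2 - 2 * F.cosine_similarity(p_a, z_b, dim=-1)
        loss_ba = 2 - 2 * F.cosine_similarity(p_b, z_a, dim=-1)
        return (loss_ab + loss_ba).mean()

    @torch.no_grad()
    def update_target(self):
        # Exponential moving average update of the target branch.
        for o, t in zip(self.online_plm.parameters(), self.target_plm.parameters()):
            t.data = self.ema_decay * t.data + (1 - self.ema_decay) * o.data
        for o, t in zip(self.online_proj.parameters(), self.target_proj.parameters()):
            t.data = self.ema_decay * t.data + (1 - self.ema_decay) * o.data

In a training loop one would backpropagate the returned loss, step the optimizer over the online parameters, and then call update_target() so the target branch slowly tracks the online branch; how the paper actually constructs the two views and schedules the EMA decay is not specified here.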
Paper Type: long
Research Area: NLP Applications
Contribution Types: Model analysis & interpretability, NLP engineering experiment
Languages Studied: English