Pretrain-KGEs: Learning Knowledge Representation from Pretrained Models for Knowledge Graph Embeddings

Zhiyuan Zhang, Xiaoqian Liu, Yi Zhang, Qi Su, Xu Sun, Bin He

01 Dec 2019 (modified: 23 Oct 2020) · OpenReview Anonymous Preprint Blind Submission
Keywords: Knowledge graph embedding learning, Pretrained language model, BERT, Pretraining
TL;DR: We propose to learn knowledgeable entity and relation representations from BERT for knowledge graph embeddings.
Abstract: Learning knowledge graph embeddings (KGEs) is an efficient approach to knowledge graph completion. Conventional KGEs often suffer from limited knowledge representation, which leads to lower accuracy, especially when training on sparse knowledge graphs. To remedy this, we present Pretrain-KGEs, a training framework for learning more knowledgeable entity and relation embeddings by leveraging the abundant linguistic knowledge in pretrained language models. Specifically, we propose a unified approach in which we first learn entity and relation representations via pretrained language models and then use these representations to initialize the entity and relation embeddings for training KGE models. Our proposed method is model-agnostic in the sense that it can be applied to any variant of KGE models. Experimental results show that our method consistently improves performance and achieves state-of-the-art results with different KGE models such as TransE and QuatE, across four benchmark KG datasets on link prediction and triplet classification tasks.
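
To make the recipe in the abstract concrete, the sketch below illustrates one way to initialize KGE embeddings from a pretrained language model and then fine-tune them with a TransE-style margin loss. This is a minimal illustration assuming the HuggingFace `transformers` library and PyTorch, not the authors' implementation; the model name, embedding dimension, projection layer, and toy triples are placeholder assumptions.

```python
# Minimal sketch (not the authors' released code): encode entity/relation surface
# forms with a pretrained BERT, use the resulting vectors to initialize KGE
# embeddings, then fine-tune with a TransE-style margin ranking objective.
# Assumes PyTorch and HuggingFace `transformers`; names and data are placeholders.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

dim = 200                                       # KGE embedding dimension (assumed)
proj = nn.Linear(bert.config.hidden_size, dim)  # projection from BERT space to KGE space

@torch.no_grad()
def encode(texts):
    """Encode surface forms with BERT; return projected [CLS] vectors."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    cls = bert(**batch).last_hidden_state[:, 0]  # [CLS] token representations
    return proj(cls)

class TransE(nn.Module):
    """TransE scoring with embeddings initialized from BERT-derived vectors."""
    def __init__(self, ent_init, rel_init, margin=1.0):
        super().__init__()
        # Initialize from pretrained-LM vectors instead of random vectors.
        self.ent = nn.Embedding.from_pretrained(ent_init, freeze=False)
        self.rel = nn.Embedding.from_pretrained(rel_init, freeze=False)
        self.margin = margin

    def score(self, h, r, t):
        # TransE: L1 distance of (head + relation) from tail; lower is better.
        return torch.norm(self.ent(h) + self.rel(r) - self.ent(t), p=1, dim=-1)

    def forward(self, pos, neg):
        # Margin ranking loss over positive vs. corrupted (negative) triples.
        return torch.relu(self.margin + self.score(*pos) - self.score(*neg)).mean()

# Toy usage: two entities, one relation, one positive and one corrupted triple.
entities = ["Barack Obama", "United States"]
relations = ["president of"]
model = TransE(encode(entities), encode(relations))
pos = (torch.tensor([0]), torch.tensor([0]), torch.tensor([1]))  # (Obama, president of, US)
neg = (torch.tensor([0]), torch.tensor([0]), torch.tensor([0]))  # corrupted tail
loss = model(pos, neg)
```

The same initialization step can in principle be reused with other KGE scoring functions (e.g., a QuatE-style model as mentioned in the abstract); only the `score` function would change, which is what makes the approach model-agnostic.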