IterefinE: Iterative KG Refinement Embeddings using Symbolic Knowledge

Published: 01 May 2020, Last Modified: 22 Oct 2023
Venue: AKBC 2020
Keywords: Knowledge graph refinement, embeddings, inference
TL;DR: We investigate the combined use of ontologies and embeddings in KG refinement task.
Subject Areas: Knowledge Representation, Semantic Web and Search, Relational AI
Abstract: Knowledge Graphs (KGs) extracted from text sources are often noisy and lead to poor performance in downstream applications such as KG-based question answering. While much recent activity focuses on addressing the sparsity of KGs by using embeddings to infer new facts, the problem of cleaning up noise in KGs through KG refinement is not as actively studied. The most successful techniques for KG refinement make use of inference rules and reasoning over ontologies. Barring a few exceptions, embeddings do not make use of ontological information, and their performance on the KG refinement task is not well understood. In this paper, we present a KG refinement framework called IterefinE which iteratively combines the two techniques: one which uses ontological information and inference rules, viz., PSL-KGI, and KG embeddings such as ComplEx and ConvE, which do not. As a result, IterefinE is able to exploit not only the ontological information to improve the quality of predictions, but also the power of KG embeddings, which (implicitly) perform longer chains of reasoning. The IterefinE framework operates in a co-training mode and produces explicit type-supervised embeddings of the refined KG from PSL-KGI, which we call TypeE-X. Our experiments over a range of KG benchmarks show that the embeddings we produce are able to reject noisy facts from the KG while inferring higher-quality new facts, resulting in up to 9% improvement in overall weighted F1 score.
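The co-training loop sketched in the abstract alternates between an ontology-aware rule step and an embedding-based scoring step. The toy Python sketch below illustrates that control flow only; the helper names, the type-constraint rule, and the frequency-based "embedding" stand-in are all illustrative assumptions, not the paper's actual PSL-KGI or ComplEx/ConvE implementation.

```python
# Schematic sketch of an IterefinE-style co-training loop.
# All names and scoring rules here are illustrative stand-ins.

from collections import Counter

# Toy KG: (head, relation, tail) triples, one of them noisy.
triples = {
    ("paris", "capital_of", "france"),
    ("berlin", "capital_of", "germany"),
    ("paris", "capital_of", "banana"),   # noisy fact
}

# Simple ontological information: entity types and relation
# domain/range constraints (relation -> (head type, tail type)).
entity_types = {"paris": "city", "berlin": "city",
                "france": "country", "germany": "country",
                "banana": "fruit"}
constraints = {"capital_of": ("city", "country")}

def rule_step(kg):
    """Reject triples violating domain/range constraints
    (a stand-in for PSL-KGI's ontology-based reasoning)."""
    kept = set()
    for h, r, t in kg:
        dom, rng = constraints.get(r, (None, None))
        if dom is None or (entity_types.get(h) == dom
                           and entity_types.get(t) == rng):
            kept.add((h, r, t))
    return kept

def embedding_step(kg):
    """Stand-in for type-supervised embedding scoring: keep
    triples whose tail type is the most frequent for the KG."""
    tail_types = Counter(entity_types[t] for _, _, t in kg)
    if not tail_types:
        return kg
    common = tail_types.most_common(1)[0][0]
    return {tr for tr in kg if entity_types[tr[2]] == common}

refined = triples
for _ in range(2):  # two co-training iterations
    refined = rule_step(refined)
    refined = embedding_step(refined)

print(sorted(refined))
```

In this toy run the rule step rejects the type-violating triple and the scoring step retains the consistent ones, mirroring how the framework both rejects noisy facts and keeps higher-quality ones.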
Archival Status: Archival
Community Implementations: 1 code implementation (https://www.catalyzex.com/paper/arxiv:2006.04509/code)