Ranking vs. Classifying: Measuring Knowledge Base Completion Quality

Published: 01 May 2020, Last Modified: 15 Sept 2024, AKBC 2020
Keywords: knowledge base completion, knowledge graph embedding, classification, ranking
TL;DR: We publish a new evaluation benchmark for knowledge graph completion methods where ranking is replaced with actual classification, and show one way to improve knowledge graph embedding models in this new setting.
Subject Areas: Knowledge Representation, Semantic Web and Search, Question Answering and Reasoning, Applications
Abstract: Knowledge base completion (KBC) methods aim to infer missing facts from the information present in a knowledge base (KB). Such a method thus needs to estimate the likelihood of candidate facts and, ultimately, to distinguish true facts from false ones to avoid compromising the KB with untrue information. In the prevailing evaluation paradigm, however, models do not actually decide whether a new fact should be accepted; they are judged solely on the position of true facts in a likelihood ranking against other candidates. We argue that considering binary predictions is essential to reflect actual KBC quality, and propose a novel evaluation paradigm designed to provide more transparent model selection criteria for a realistic scenario. We construct the data set FB14k-QAQ with an alternative evaluation data structure: instead of single facts, we use KB queries, i.e., facts where one entity is replaced with a variable, and construct the corresponding sets of entities that are correct answers. We randomly remove some of these correct answers from the data set, simulating the realistic scenario of real-world entities missing from a KB. This way, we can explicitly measure a model's ability to handle queries that have more correct answers in the real world than in the KB, including the special case of queries without any valid answer; the latter stands in especially sharp contrast to the ranking setting. We evaluate a number of state-of-the-art KB embedding models on our new benchmark. The differences in relative performance between ranking-based and classification-based evaluation that we observe in our experiments confirm our hypothesis that good performance on the ranking task does not necessarily translate to good performance on the actual completion task.
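The query-based evaluation described in the abstract can be sketched as follows. This is a hypothetical illustration, not the paper's exact protocol: for each query such as (h, r, ?) the model commits to a *set* of predicted answer entities (e.g., via score thresholding), and the sets are compared against the gold answer sets with micro-averaged F1. Queries with no valid answer are handled naturally: predicting the empty set is correct, and any prediction counts as a false positive.

```python
# Hypothetical sketch of set-based (classification-style) KBC evaluation.
# predictions, gold: lists of sets of entity ids, one set per KB query.

def micro_f1(predictions, gold):
    """Micro-averaged F1 over per-query answer sets."""
    tp = fp = fn = 0
    for pred, true in zip(predictions, gold):
        tp += len(pred & true)   # correctly predicted answers
        fp += len(pred - true)   # predicted but not in the gold set
        fn += len(true - pred)   # gold answers the model missed
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Example: two queries; the second has no valid answer in the KB,
# and the model correctly predicts the empty set for it.
preds = [{1, 2, 5}, set()]
gold = [{1, 2, 3}, set()]
print(round(micro_f1(preds, gold), 3))  # → 0.667
```

Note that a ranking metric such as mean reciprocal rank cannot express the second query at all, since there is no correct entity to rank.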
Our results motivate future work on KB embedding models with better prediction separability and, as a first step in that direction, we propose a simple variant of TransE that encourages thresholding and achieves a significant improvement in classification F1 score relative to the original TransE.
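To make the thresholding idea concrete, here is a minimal sketch of how a TransE-style score can be turned into a binary accept/reject decision. The scoring function is standard TransE (a fact (h, r, t) is plausible when h + r ≈ t); the threshold value and the `classify` helper are illustrative assumptions, not the paper's actual variant.

```python
import numpy as np

def transe_score(h, r, t):
    # Standard TransE plausibility: negated distance between h + r and t,
    # so higher scores mean more plausible facts.
    return -np.linalg.norm(h + r - t)

def classify(h, r, t, threshold):
    # Hypothetical decision rule: accept the fact only if its score
    # clears a (per-relation or global) threshold, turning the ranking
    # model into a binary classifier.
    return transe_score(h, r, t) >= threshold

# Toy 2-d embeddings for illustration.
h = np.array([0.1, 0.2])
r = np.array([0.3, 0.0])
t_true = np.array([0.4, 0.2])   # h + r matches t_true exactly
t_false = np.array([1.0, 1.0])

print(classify(h, r, t_true, threshold=-0.5))   # → True
print(classify(h, r, t_false, threshold=-0.5))  # → False
```

The separability problem the paper targets is visible here: plain TransE is trained only to rank true tails above false ones, so no single threshold may cleanly split the two score distributions; a variant that encourages thresholding makes such a split easier to learn.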
Archival Status: Archival
Community Implementations: [2 code implementations](https://www.catalyzex.com/paper/ranking-vs-classifying-measuring-knowledge/code)
