LP-DIXIT: Evaluating Explanations for Link Prediction on Knowledge Graphs using Large Language Models

Published: 29 Jan 2025 (last modified: 29 Jan 2025) · WWW 2025 Poster · CC BY 4.0
Track: Semantics and knowledge
Keywords: Knowledge Graphs, Large Language Models, Link Prediction, Explanation
TL;DR: We propose LP-DIXIT to algorithmically evaluate explanations of link predictions by determining forward simulatability variation and adopting large language models to mimic users.
Abstract: Link prediction methods predict missing facts in incomplete knowledge graphs, often using embeddings to enhance scalability. However, embeddings complicate explainability, which is crucial for users' understanding of inferences in many domains. Methods have emerged that explain predictions by identifying the portions of knowledge supporting them. To evaluate explanations from a user perspective, they can be compared to those in benchmarks, though such benchmarks are limited to simplistic graphs. In contrast, user studies on forward simulatability variation measure how explanations improve predictability, i.e., the users' ability to predict the results of inferences, which is key to trust. However, user studies face scalability and reproducibility issues on large graphs. Recognizing these gaps, we propose LP-DIXIT to algorithmically evaluate explanations of link predictions by determining forward simulatability variation and adopting large language models to mimic users, as is done in other domains, e.g., in evaluating approaches to language-related tasks. We experimentally prove that LP-DIXIT evaluates the explanations in benchmarks as effective, and we adopt it to compare state-of-the-art explanation methods.
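The core idea, forward simulatability variation (FSV), can be sketched as follows. This is a minimal illustration, not the paper's implementation: the prompt wording, the `llm` callable, and the ternary scoring are assumptions for illustration; the LLM is asked to complete a query triple before and after seeing an explanation, and the change in correctness is the explanation's score.

```python
# Hypothetical sketch of forward simulatability variation (FSV).
# `llm` is any callable mapping a prompt string to a predicted entity
# (in practice a large language model; here it can be any stub).

def simulate(llm, query, explanation=None):
    """Ask the LLM to predict the object of a (subject, predicate) query,
    optionally showing it an explanation (a set of supporting facts)."""
    subject, predicate = query
    prompt = f"Complete the triple: ({subject}, {predicate}, ?)"
    if explanation is not None:
        prompt += f"\nSupporting facts: {explanation}"
    return llm(prompt)

def fsv(llm, query, predicted_object, explanation):
    """Return +1 if the explanation makes the prediction simulatable,
    -1 if it breaks simulatability, 0 if it changes nothing."""
    pre = simulate(llm, query) == predicted_object
    post = simulate(llm, query, explanation) == predicted_object
    return int(post) - int(pre)
```

A positive FSV indicates the explanation helped the simulated user anticipate the link prediction; averaging FSV over many queries yields a scalable, reproducible proxy for a user study.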
Submission Number: 1880