Keywords: KGE explainability, probabilistic reasoning, embedded representations, relational learning
TL;DR: We present a class of NeSy models that provide post-hoc explanations of black-box KGE models in link prediction tasks
Track: Neurosymbolic Methods for Trustworthy and Interpretable AI
Abstract: Knowledge Graph Embedding (KGE) models have shown remarkable performance on the knowledge graph completion task, thanks to their ability to capture and represent complex relational patterns. Indeed, modern KGEs incorporate different inductive biases that can account for relational patterns such as compositional reasoning chains, symmetries, anti-symmetries, and hierarchical patterns.
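As a purely illustrative example (not taken from the submission), a rotation-based KGE in the style of RotatE shows how such inductive biases arise from the scoring geometry itself:

```latex
% Assumed illustration (RotatE-style scoring), not the submission's own model:
% relations act as element-wise rotations on complex entity embeddings,
f(h, r, t) = -\lVert \mathbf{h} \circ \mathbf{r} - \mathbf{t} \rVert,
\qquad |r_i| = 1 .
% Symmetry:       \mathbf{r} \circ \mathbf{r} = \mathbf{1}   (a rotation by \pi)
% Anti-symmetry:  \mathbf{r} \circ \mathbf{r} \neq \mathbf{1}
% Composition:    \mathbf{r}_3 = \mathbf{r}_1 \circ \mathbf{r}_2
%                 captures r_3(x,z) \leftarrow r_1(x,y) \wedge r_2(y,z)
```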
However, KGE models inherently lack interpretability: they generalize by mapping human-interpretable units of information, such as constants and predicates, into vector embeddings in a dense latent space that is completely opaque to a human operator.
On the other hand, various Neural-Symbolic (NeSy) methods have shown competitive results on knowledge completion tasks, but their focus on achieving high accuracy often comes at the cost of interpretability. Many existing NeSy approaches, while inherently interpretable, resort to blending their predictions with opaque KGEs to boost performance, ultimately diminishing their explanatory power.
This paper introduces a novel approach that addresses this limitation by applying a post-hoc NeSy method to KGE models, ensuring both high fidelity to the underlying KGE model and the inherent interpretability of NeSy approaches. The proposed framework defines NeSy reasoners that generate explicit logic proofs using predefined or learned rules, yielding transparent and explainable predictions. We evaluate the methodology using both accuracy- and explainability-based metrics, demonstrating the effectiveness of our approach.
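To make the post-hoc setup concrete, the hedged sketch below fits an interpretable rule-weighted "student" to mimic a black-box KGE-style "teacher", so that each accepted prediction is accompanied by the rule and variable bindings that proved it. The names (`teacher_score`, the toy graph, the candidate rules) and the fitting loop are illustrative assumptions, not the authors' actual implementation from the linked repository.

```python
# Hypothetical sketch: distil a black-box KGE-style scorer into an
# interpretable rule-weighted reasoner whose predictions carry proofs.
import numpy as np

# Toy knowledge graph as (head, relation, tail) triples.
kg = {
    ("anna", "parent_of", "bob"),
    ("bob", "parent_of", "carl"),
    ("dora", "parent_of", "eve"),
}
entities = sorted({e for h, _, t in kg for e in (h, t)})

def teacher_score(h, r, t):
    """Stand-in for an opaque KGE scorer (e.g. a trained TransE/RotatE model).
    Here it simply rewards grandparent links supported by a parent chain."""
    chain = any((h, "parent_of", y) in kg and (y, "parent_of", t) in kg
                for y in entities)
    return 0.95 if (r == "grandparent_of" and chain) else 0.05

# Candidate Horn rules for the target query grandparent_of(x, z);
# each returns the variable bindings under which its body is satisfied.
def rule_chain(x, z):
    # grandparent_of(x, z) <- parent_of(x, y) AND parent_of(y, z)
    return [y for y in entities
            if (x, "parent_of", y) in kg and (y, "parent_of", z) in kg]

def rule_direct(x, z):
    # grandparent_of(x, z) <- parent_of(x, z)   (a deliberately wrong rule)
    return ["-"] if (x, "parent_of", z) in kg else []

rules = [("chain", rule_chain), ("direct", rule_direct)]

def features(x, z):
    return np.array([1.0 if fire(x, z) else 0.0 for _, fire in rules])

# Fit rule weights so the student's sigmoid score matches the teacher's.
queries = [(x, z) for x in entities for z in entities if x != z]
X = np.stack([features(x, z) for x, z in queries])
y = np.array([teacher_score(x, "grandparent_of", z) for x, z in queries])

w, b = np.zeros(len(rules)), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # student scores
    grad = p - y                            # cross-entropy gradient w.r.t. logits
    w -= 0.1 * X.T @ grad / len(queries)
    b -= 0.1 * grad.mean()

# Prediction plus proof: report the firing rule with the largest learned weight.
x, z = "anna", "carl"
fired = [(name, fire(x, z)) for name, fire in rules if fire(x, z)]
best = max(fired, key=lambda nf: w[[n for n, _ in rules].index(nf[0])])
score = 1.0 / (1.0 + np.exp(-(features(x, z) @ w + b)))
print(f"grandparent_of({x}, {z}) score: {score:.3f}")
print("proof via rule", best[0], "with bindings y =", best[1])
```

In this toy setting, fidelity is measured against the teacher's scores while explanations come from explicit proofs; the same idea would extend to learned rule sets and real KGE scorers.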
Paper Type: Long Paper
Software: https://github.com/rodrigo-castellano/KGE-Distillation
Submission Number: 55