KGEx: Explaining Knowledge Graph Embeddings Via Subgraph Sampling and Knowledge Distillation

Published: 18 Nov 2023, Last Modified: 29 Nov 2023
Venue: LoG 2023 (Poster)
Keywords: knowledge graph embeddings, explainable AI
TL;DR: A post-hoc explanation method for neural link predictors based on subgraph sampling and knowledge distillation
Abstract: Although knowledge graph embeddings (KGE) are the go-to choice for link prediction on knowledge graphs, their interpretability remains relatively unexplored. We present KGEx, a novel post-hoc method that explains individual link predictions by drawing inspiration from research on surrogate models. Given a target triple to predict, KGEx trains surrogate KGE models that are used to identify important training triples. To gauge the impact of a training triple, we sample random portions of the target triple's neighborhood and train a surrogate KGE model on each of them. To ensure faithfulness, each surrogate is trained by distilling knowledge from the original KGE model. We then assess how well each surrogate predicts the target triple being explained, the intuition being that surrogates leading to faithful predictions were trained on "impactful" neighborhood samples. Under this assumption, we harvest triples that appear frequently across impactful neighborhoods. We conduct extensive experiments on two publicly available datasets to demonstrate that KGEx provides explanations faithful to the black-box model.
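The abstract outlines a sample-distill-assess loop; the following is a minimal Python sketch of that loop under stated assumptions, not the authors' implementation. The helper train_distilled_surrogate is hypothetical (it stands in for training a small KGE model on a neighborhood sample while distilling scores from the black-box teacher, returning a triple-scoring function), and triples are assumed to be (subject, predicate, object) tuples.

    import random
    from collections import Counter

    def kgex_explain(target, training_triples, teacher,
                     train_distilled_surrogate,
                     n_samples=50, sample_size=100,
                     faithful_threshold=0.5, top_k=10):
        """Sketch of the KGEx pipeline described in the abstract."""
        s, _, o = target
        # Neighborhood: training triples sharing an entity with the target.
        neighborhood = [t for t in training_triples
                        if s in (t[0], t[2]) or o in (t[0], t[2])]

        counts = Counter()
        for _ in range(n_samples):
            # Draw a random portion of the target triple's neighborhood.
            sample = random.sample(neighborhood,
                                   min(sample_size, len(neighborhood)))
            # Hypothetical helper: train a surrogate on the sample by
            # distilling knowledge from the black-box teacher model.
            surrogate_score = train_distilled_surrogate(sample, teacher)
            # A sample is "impactful" if its surrogate still predicts
            # the target triple faithfully.
            if surrogate_score(target) >= faithful_threshold:
                counts.update(sample)

        # Triples appearing most often across impactful samples
        # constitute the explanation.
        return counts.most_common(top_k)

Passing the surrogate trainer as a callable reflects that, as described, any KGE architecture could serve as the surrogate so long as it is distilled from the original model; the threshold and sample sizes here are illustrative placeholders, not values from the paper.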
Submission Type: Full paper proceedings track submission (max 9 main pages).
Software: https://github.com/Accenture/AmpliGraph
Submission Number: 125