Abstract: Attention weights in Graph Transformers are often claimed to confer explainability, purportedly offering insight into why a model makes its decisions. Although an extensive range of techniques for explaining Graph Neural Networks already exists, their explainability and model transparency leave room for improvement. A significant recent challenge in Explainable AI is correctly interpreting neuron behavior to identify what a deep learning system has internally detected as relevant to the input. To tackle these challenges, we present a knowledge attribution method for the link prediction task that identifies the neurons expressing the input Knowledge Graph (KG) triples. The method not only enhances the explainability of the model but also improves its transparency, providing a clearer understanding of how specific factual knowledge is stored. It enables human-centric, knowledge-attribution explanations by extracting factual knowledge from the identified decision drivers. Empirical results on two standard KG-based link prediction datasets shed light on how knowledge is stored within the Graph Transformer architecture.
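The abstract does not give implementation details, but the following Python sketch illustrates one common form of neuron-level knowledge attribution (integrated gradients over feed-forward activations, in the spirit of knowledge-neuron methods) applied to a toy triple scorer. The `TripleScorer` model, the `knowledge_attribution` function, and all hyperparameters are hypothetical stand-ins for illustration only, not the paper's actual Graph Transformer architecture or attribution procedure.

```python
import torch
import torch.nn as nn

class TripleScorer(nn.Module):
    """Toy scorer: embeds (head, relation, tail), passes the concatenation
    through one feed-forward block, and outputs a plausibility score.
    This stands in for the Graph Transformer used for link prediction."""
    def __init__(self, n_entities, n_relations, dim=64, hidden=256):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)
        self.rel = nn.Embedding(n_relations, dim)
        self.ff_in = nn.Linear(3 * dim, hidden)   # hidden neurons to attribute over
        self.ff_out = nn.Linear(hidden, 1)

    def forward(self, h, r, t, neuron_scale=None):
        x = torch.cat([self.ent(h), self.rel(r), self.ent(t)], dim=-1)
        a = torch.relu(self.ff_in(x))             # hidden neuron activations
        if neuron_scale is not None:              # scale activations for the path integral
            a = a * neuron_scale
        return self.ff_out(a).squeeze(-1)

def knowledge_attribution(model, h, r, t, steps=20):
    """Integrated-gradients-style attribution of the triple score to each
    hidden neuron: scale the activations from 0 to 1 and accumulate the
    gradients along the path (Riemann approximation of the integral)."""
    hidden = model.ff_in.out_features
    total_grad = torch.zeros(hidden)
    for k in range(1, steps + 1):
        alpha = torch.full((hidden,), k / steps, requires_grad=True)
        score = model(h, r, t, neuron_scale=alpha)
        grad, = torch.autograd.grad(score.sum(), alpha)
        total_grad += grad
    return total_grad / steps

if __name__ == "__main__":
    model = TripleScorer(n_entities=100, n_relations=10)
    h, r, t = torch.tensor([3]), torch.tensor([1]), torch.tensor([42])
    attr = knowledge_attribution(model, h, r, t)
    top = torch.topk(attr, k=5).indices
    print("neurons most responsible for this triple:", top.tolist())
```

Neurons with the highest attribution scores would be the candidate "decision drivers" from which factual knowledge could then be extracted and inspected.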