Improved Self-Explanatory Graph Learning Method Based on Controlled Information Compression and Branch Optimization
Keywords: Self-explainable machine learning, Graph neural network, Information compression
TL;DR: By compressing noisy information rather than directly removing noisy structures, the method mitigates distribution shift, enabling high classification accuracy in self-explainable graph neural networks.
Abstract: Graph Neural Networks have gained widespread application across various domains and have motivated research into their explainability. Self-explainable methods incorporate inherent explanations during prediction and provide insights that reveal the decision-making process. However, the transparent explainability of these methods often comes at the cost of predictive performance. One reason is that these methods suffer from a distribution shift when directly using explanation subgraphs to make predictions. In this work, we propose Self-explAinable Graph lEarning (SAGE) to improve the performance of self-explainable methods. Specifically, SAGE learns attention weights for edges to guide the message-passing process, generating more meaningful and discriminative representations. In this process, we emphasize label-relevant critical structures while diminishing the influence of noisy ones. Additionally, we control the degree of noisy-information compression applied to the subgraphs by establishing a lower bound for the attention scores of irrelevant noisy structures, which reduces the deviation from the original graph and mitigates the distribution shift. Furthermore, we introduce an optional strategy called branch optimization, which explores the optimal GNN state to improve the model's optimization effectiveness. Experimental results on real-world datasets demonstrate that SAGE achieves predictive accuracy comparable to or higher than baselines. Compared to the backbone model, our self-explainable framework attains an average performance improvement of 10.5% across four datasets.
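The core mechanism described in the abstract — edge-level attention guiding message passing, with a lower bound on the attention of noisy edges so their information is compressed rather than removed — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function name, the `alpha_min` parameter, and the mean-style aggregation are assumptions for illustration.

```python
import numpy as np

def clamped_attention_message_passing(X, edges, att_scores, alpha_min=0.2):
    """One round of attention-weighted message passing (illustrative sketch).

    X          : (n, d) array of node features
    edges      : list of directed (src, dst) pairs
    att_scores : learned attention weight per edge, in [0, 1]
    alpha_min  : lower bound on attention for noisy edges; clamping keeps
                 noisy structures compressed but present, so the weighted
                 graph deviates less from the original (mitigating
                 distribution shift) than hard edge removal would
    """
    n, _ = X.shape
    out = np.zeros_like(X)
    deg = np.zeros(n)
    for (src, dst), a in zip(edges, att_scores):
        w = max(a, alpha_min)  # compress, never fully drop, an edge
        out[dst] += w * X[src]
        deg[dst] += w
    deg[deg == 0] = 1.0  # avoid division by zero for nodes with no in-edges
    return out / deg[:, None]

# Tiny example: node 2 receives from nodes 0 (attention 0.0, clamped to
# alpha_min) and 1 (attention 1.0), so node 0's features still contribute.
X = np.eye(3)
msgs = clamped_attention_message_passing(
    X, edges=[(0, 2), (1, 2)], att_scores=[0.0, 1.0], alpha_min=0.2)
```

With hard removal (the case `alpha_min = 0`), the zero-attention edge would vanish entirely; the clamp instead down-weights it, which is the controlled-compression idea the abstract contrasts with directly deleting noisy structures.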
Submission Number: 1