EGNN: Constructing explainable graph neural networks via knowledge distillation

Published: 2022 · Last Modified: 26 Aug 2024 · Knowl. Based Syst. 2022 · CC BY-SA 4.0
Abstract: Highlights
- By simultaneously optimizing multiple objectives, knowledge distillation from the pretrained teacher model and the cross-entropy loss, the proposed EGNN model achieves superior performance.
- A transparent and interpretable neighbor selection strategy is designed: neighbor nodes are selected layer by layer, so each neighbor's contribution to the user's representation is traceable.
- Four state-of-the-art GNN models are used as teacher networks. Experimental results on three real-world datasets demonstrate the effectiveness of the proposed framework.
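The first highlight refers to jointly optimizing a distillation term against the teacher's soft predictions and a standard cross-entropy term on ground-truth labels. A minimal sketch of such a combined objective is shown below; this follows the common temperature-scaled distillation formulation, not necessarily the paper's exact loss, and the weights `alpha` and `T` are illustrative.

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; higher T produces softer distributions.
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def kd_loss(student_logits, teacher_logits, true_label, T=2.0, alpha=0.5):
    """Weighted sum of a distillation term and a cross-entropy term.

    alpha balances the two objectives; T is the softening temperature.
    """
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    # KL divergence between softened teacher and student distributions,
    # scaled by T^2 as is conventional in distillation.
    distill = T * T * np.sum(
        p_teacher * (np.log(p_teacher + 1e-12) - np.log(p_student + 1e-12))
    )
    # Standard cross-entropy on the ground-truth label (T = 1).
    ce = -np.log(softmax(student_logits)[true_label] + 1e-12)
    return alpha * distill + (1 - alpha) * ce
```

When the student matches the teacher exactly, the distillation term vanishes and only the weighted cross-entropy remains, which makes the balance between the two objectives easy to inspect.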