Highlights
• Identify the key issues that constrain the performance of existing GNN knowledge distillation methods.
• Make the first attempt to explore the GNN knowledge distillation problem from a causal perspective.
• Propose a novel, general framework for learning MLPs on graphs with both exceptional performance and outstanding stability.
• Demonstrate the validity and superiority of the proposed model in experiments.