HGMD: Rethinking Hard Sample Distillation for GNN-to-MLP Knowledge Distillation

22 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Graph Knowledge Distillation, Hard Sample Mining
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: We revisit the knowledge samples (nodes) in GNNs from the perspective of hardness rather than correctness, and identify that hard sample distillation may be a major performance bottleneck of existing distillation algorithms.
Abstract: To bridge the gap between powerful Graph Neural Networks (GNNs) and lightweight Multi-Layer Perceptrons (MLPs), GNN-to-MLP Knowledge Distillation (KD) distills knowledge from a well-trained teacher GNN into a student MLP. A counter-intuitive observation is that "better teacher, better student" does not always hold for GNN-to-MLP KD, which inspires us to explore what makes a better GNN knowledge sample (node). In this paper, we revisit the knowledge samples in teacher GNNs from the perspective of hardness rather than correctness, and identify that hard sample distillation may be a major performance bottleneck of existing KD algorithms. GNN-to-MLP KD involves two different types of hardness: a student-free knowledge hardness describing the inherent complexity of GNN knowledge, and a student-dependent distillation hardness describing the difficulty of teacher-to-student distillation. We propose a novel Hardness-aware GNN-to-MLP Distillation (HGMD) framework, which models both knowledge and distillation hardness and then extracts a hardness-aware subgraph for each sample separately, where a harder sample is assigned a larger subgraph. Finally, two hardness-aware distillation schemes (i.e., HGMD-weight and HGMD-mixup) are devised to distill subgraph-level knowledge from teacher GNNs into the corresponding nodes of student MLPs. As a non-parametric distillation framework, HGMD does not involve any learnable parameters beyond the student MLPs, yet it still outperforms most of the state-of-the-art competitors. For example, HGMD-mixup improves over the vanilla MLPs by 12.95% and outperforms its teacher GNNs by 2.48%, averaged over seven real-world datasets and three GNN architectures.
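The core idea of the abstract (harder samples get larger subgraphs, whose teacher outputs are mixed into the distillation target) can be sketched in a toy form. This is only an illustrative assumption-laden sketch, not the paper's actual method: it uses prediction entropy as a stand-in for knowledge hardness, takes a node's direct neighbors as its subgraph, and implements a simple convex mixup over teacher probabilities; the helper names are hypothetical.

```python
import numpy as np

def entropy(p):
    # Shannon entropy as a simple proxy for knowledge hardness
    p = np.clip(p, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=-1)

def hardness_aware_targets(teacher_probs, adj):
    """Toy sketch of hardness-aware subgraph mixup (hypothetical helper).

    teacher_probs: (N, C) softmax outputs of the teacher GNN
    adj:           (N, N) binary adjacency matrix
    Returns mixed soft targets for the student MLP: a harder node
    mixes in more of its neighbors' teacher predictions.
    """
    n = teacher_probs.shape[0]
    h = entropy(teacher_probs)
    h = (h - h.min()) / (h.max() - h.min() + 1e-12)  # normalize to [0, 1]
    targets = np.empty_like(teacher_probs)
    for i in range(n):
        nbrs = np.flatnonzero(adj[i])
        # harder node -> larger subgraph: include more neighbors
        k = int(round(h[i] * len(nbrs)))
        group = np.concatenate([[i], nbrs[:k]])
        if len(group) == 1:
            targets[i] = teacher_probs[i]
            continue
        # mixup: the center node keeps weight (1 - hardness),
        # neighbors share the remaining mass equally
        w = np.full(len(group), h[i] / (len(group) - 1))
        w[0] = 1.0 - h[i]
        targets[i] = w @ teacher_probs[group]
    return targets
```

The student MLP would then be trained to match these mixed targets (e.g., via KL divergence) instead of the raw per-node teacher outputs; since the mixing weights are derived from hardness rather than learned, no extra parameters are introduced.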
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 5129