Keywords: Data Knowledge Distillation, Graph Neural Network, Directed Graph Learning
TL;DR: The first attempt to fully exploit the potential of data to empower directed graph learning through data-centric machine learning.
Abstract: The directed graph (digraph), as a generalization of the undirected graph, offers superior capability for modeling complex topological systems and has garnered considerable attention in recent years. Despite notable efforts by existing DiGraph Neural Networks (DiGNNs) to leverage directed edges, they still fail to comprehensively exploit the abundant data knowledge concealed in digraphs. This limitation results in sub-optimal performance and underscores the need to further explore the potential correlations between directed topology and node profiles from a data-centric perspective, thereby endowing model-centric neural networks with stronger encoding capabilities. In this paper, we propose \textbf{E}ntropy-driven \textbf{D}igraph knowl\textbf{E}dge distillatio\textbf{N} (EDEN), which can serve as a new data-centric digraph learning paradigm or as a model-agnostic, plug-and-play online data knowledge distillation module for most existing DiGNNs, enabling them to fully exploit informative digraphs. Specifically, EDEN first uses directed structural measurements from a topological perspective to construct a knowledge tree, guided by hierarchical encoding theory. Subsequently, EDEN quantifies the mutual information of nodes from a feature perspective to further refine the knowledge flow, facilitating layer-wise knowledge distillation along the tree. As a general framework, EDEN also extends naturally to undirected scenarios with satisfactory performance. In our experiments, EDEN is evaluated on 14 (di)graph datasets across 4 downstream tasks. The results demonstrate that EDEN attains SOTA performance and yields substantial improvements for prevalent (Di)GNNs.
Supplementary Material: zip
Primary Area: learning on graphs and other geometries & topologies
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 5375