Keywords: graph neural networks (GNNs), graph condensation, training trajectory meta-matching, graph neural feature score
Abstract: Graph condensation, which reduces the size of a large-scale graph by synthesizing a small-scale condensed graph as its substitution, has immediate benefits for various graph learning tasks.
However, existing graph condensation methods rely on the joint optimization of nodes and structures in the condensed graph, which raises critical concerns about their effectiveness and generalization ability.
In this paper, we advocate a new Structure-Free Graph Condensation paradigm, named SFGC, to distill a large-scale graph into a small-scale graph node set without explicit graph structures, i.e., graph-free data.
Our idea is to implicitly encode topology structure information into the node attributes in the synthesized graph-free data, whose topology is reduced to an identity matrix.
Specifically, SFGC contains two collaborative components:
(1) a training trajectory meta-matching scheme for effectively synthesizing small-scale graph-free data;
(2) a graph neural feature score metric for dynamically evaluating the quality of the condensed data.
Through training trajectory meta-matching, SFGC aligns the long-term GNN learning behaviors between the large-scale graph and the condensed small-scale graph-free data, ensuring comprehensive and compact transfer of informative knowledge to the graph-free data.
The condensed graph-free data is then dynamically evaluated with the graph neural feature score, a closed-form metric that ensures the strong expressiveness of the condensed graph-free data.
Extensive experiments verify the superiority of SFGC across different condensation ratios.
Supplementary Material: pdf
Submission Number: 5355