Keywords: Graph Neural Network, Graph Condensation, Lightweight Model
Abstract: Graph condensation aims to compress large-scale graph data into a much smaller counterpart, enabling efficient training of graph neural networks (GNNs) while preserving strong test performance and minimizing storage demands. Despite the promising performance of existing graph condensation methods, they still face two challenges, i.e., inefficient bi-level optimization and rigid condensed-node label design, which significantly limit both efficiency and adaptability. To address these challenges, in this work, we propose a novel approach, Lightweight Graph-Free Condensation with MLP-driven optimization, named LightGFC, which condenses large-scale graph data into a structure-free node set in a simple, accurate, yet highly efficient manner. Specifically, our proposed LightGFC consists of three essential stages: (S1) Proto-structural aggregation, which first embeds the structural information of the original graph into proto-graph-free data through multi-hop neighbor aggregation; (S2) MLP-driven structure-free pretraining, which takes the proto-graph-free data as input to train an MLP model, aligning the structurally condensed representations with the node labels of the original graph; (S3) Lightweight class-to-node condensation, which condenses semantic and class information into representative nodes via a class-to-node projection algorithm with a lightweight projector, yielding the final graph-free data. Extensive experiments show that the proposed LightGFC achieves state-of-the-art accuracy across multiple benchmarks while requiring minimal training time (as little as 2.0s), highlighting both its effectiveness and efficiency.
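To make stage (S1) concrete, the snippet below sketches one common form of multi-hop neighbor aggregation that folds graph structure into node features (as in SGC-style propagation), producing "structure-free" features an MLP can then consume. This is a hypothetical illustration only; the function name `multi_hop_aggregate` and the choice of symmetric normalization are assumptions, not the paper's exact procedure.

```python
import numpy as np

def multi_hop_aggregate(adj, feats, num_hops=2):
    """Fold k-hop structural information into node features.

    Hypothetical sketch of stage (S1): repeatedly propagate features
    over a self-looped, symmetrically normalized adjacency matrix,
    so that structure is embedded into the feature matrix itself.
    """
    n = adj.shape[0]
    a = adj + np.eye(n)  # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a.sum(axis=1))
    # Symmetric normalization: D^{-1/2} (A + I) D^{-1/2}
    a_norm = a * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    out = feats
    for _ in range(num_hops):
        out = a_norm @ out  # one hop of neighbor aggregation
    return out  # "proto-graph-free" features: structure folded into X

# Toy usage: a 4-node path graph with 3-dimensional features
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
feats = np.random.rand(4, 3)
proto = multi_hop_aggregate(adj, feats, num_hops=2)
print(proto.shape)  # (4, 3): same shape, but structure-aware
```

After this step, the adjacency matrix is no longer needed: the downstream MLP in (S2) trains directly on `proto`, which is what makes the pipeline graph-free.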
Primary Area: learning on graphs and other geometries & topologies
Submission Number: 3703