Efficient Graph Continual Learning via Lightweight Graph Neural Tangent Kernel-based Dataset Distillation
Abstract: Graph Neural Networks (GNNs) have emerged as a fundamental tool for modeling complex graph structures across diverse applications.
However, adapting pretrained GNNs to varied downstream tasks typically relies on fine-tuning-based continual learning, which incurs high computational costs and hinders the development of Large Graph Models (LGMs).
In this paper, we investigate an efficient and generalizable dataset distillation framework for Graph Continual Learning (GCL) across multiple downstream tasks, implemented through a novel Lightweight Graph Neural Tangent Kernel (LIGHTGNTK).
Specifically, LIGHTGNTK employs a low-rank approximation of the Laplacian matrix via Bernoulli sampling and linear association within the GNTK (sketched below). This design enables efficient capture of both structural and feature relationships while supporting gradient-based dataset distillation.
Additionally, LIGHTGNTK incorporates a unified subgraph anchoring strategy (also sketched below), allowing it to handle graph-level, node-level, and edge-level tasks under diverse input structures.
Comprehensive experiments on several datasets show that LIGHTGNTK achieves state-of-the-art performance in GCL scenarios, promoting the development of adaptive and scalable LGMs.
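The abstract does not spell out the factorization, so the following is only a minimal sketch of one standard reading of the low-rank step: Bernoulli sampling of Laplacian columns feeding a Nyström-style factorization, whose factors can stand in for the dense Laplacian inside GNTK aggregation. The function and parameter names (`bernoulli_low_rank`, `keep_prob`) are illustrative, not from the paper.

```python
import numpy as np

def normalized_laplacian(adj: np.ndarray) -> np.ndarray:
    """Symmetric normalized Laplacian L = I - D^{-1/2} A D^{-1/2}."""
    deg = adj.sum(axis=1).astype(float)
    d = np.zeros_like(deg)
    d[deg > 0] = deg[deg > 0] ** -0.5
    return np.eye(adj.shape[0]) - d[:, None] * adj * d[None, :]

def bernoulli_low_rank(lap: np.ndarray, keep_prob: float, seed: int = 0):
    """Nystrom-style low-rank factorization of the PSD Laplacian via
    Bernoulli column sampling: lap ~= C @ W_pinv @ C.T, where column
    indices are kept i.i.d. with probability `keep_prob` (hypothetical
    name; the paper's exact sampling scheme may differ)."""
    rng = np.random.default_rng(seed)
    idx = np.flatnonzero(rng.random(lap.shape[0]) < keep_prob)
    C = lap[:, idx]                                  # n x r tall factor
    W_pinv = np.linalg.pinv(lap[np.ix_(idx, idx)])   # r x r core
    return C, W_pinv

# A GNTK aggregation step Sigma' = L @ Sigma @ L.T can then be approximated
# factor-by-factor, never forming a dense n x n product:
#   Sigma' ~= C @ (W_pinv @ (C.T @ Sigma @ C) @ W_pinv) @ C.T
```

With r ≪ n kept columns, each such aggregation costs O(n²r) rather than O(n³), which is the kind of saving a "lightweight" GNTK would need to make gradient-based dataset distillation tractable.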
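The unified subgraph anchoring strategy is likewise described only at a high level. One plausible reading, sketched below, is a k-hop neighborhood around an anchor set, which gives node-, edge-, and graph-level tasks a single subgraph interface; `khop_subgraph` and its arguments are hypothetical names, not the paper's API.

```python
def khop_subgraph(adj_list: dict, anchors: set, k: int) -> set:
    """Nodes within k hops of the anchor set.
    Node-level task:  anchors = {v}        (anchor a single node)
    Edge-level task:  anchors = {u, v}     (anchor both endpoints)
    Graph-level task: anchors = all nodes  (the whole graph is kept)."""
    frontier, seen = set(anchors), set(anchors)
    for _ in range(k):
        frontier = {w for v in frontier for w in adj_list.get(v, ())} - seen
        if not frontier:
            break
        seen |= frontier
    return seen

# Example: 1-hop anchored subgraph around edge (0, 1) in a path graph.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(khop_subgraph(adj, {0, 1}, k=1))  # {0, 1, 2}
```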
Lay Summary: Graphs are often used to understand complex relationships, like how people connect in social networks or how molecules bond in chemistry. Graph Neural Networks (GNNs) are powerful models designed to process this kind of graph data and help make predictions or find patterns. However, making these GNNs work well on different tasks usually means retraining them from scratch, which takes a lot of time and computing resources.
To tackle this challenge, we developed a new, efficient method that lets GNNs quickly adapt to new tasks without starting over on the entire training dataset each time. Our approach, called LIGHTGNTK, cleverly selects the important parts of large graphs so the GNN model can achieve strong performance from only a few graph samples. By looking at both the structure and the features of graphs through the lens of subgraphs, our method can easily switch between tasks that focus on whole graphs, individual nodes, or the connections between them.
We tested LIGHTGNTK on a variety of real-world problems, where it outperformed current techniques in most cases. This means our work could make it much easier and cheaper to build adaptable, large-scale graph-based AI tools for science, industry, and beyond.
Primary Area: Deep Learning->Graph Neural Networks
Keywords: Graph Neural Tangent Kernel, Graph Neural Networks, Graph Continual Learning
Submission Number: 9134