Abstract: The “pre-train & fine-tune” strategy has gained prominence in Graph Neural Networks (GNNs): a model first learns from unlabeled data and is then fine-tuned on labeled data for specific downstream tasks. However, fully fine-tuning all parameters is inefficient for large-scale models. To address this, we propose WAGT (Weight Adaptive module for Graph Tuning), which introduces a ‘weight adaptive module’ inspired by synaptic modulation in the human brain, reducing the number of fine-tuned parameters to just 0.7%. WAGT also includes an optimal transport-based regularizer for effective knowledge transfer. Experiments demonstrate WAGT’s efficiency and superior performance over existing methods.
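The abstract does not describe WAGT's actual architecture, so the following is only a minimal PyTorch sketch of the two ingredients it names: a small learnable "weight adaptive" gate applied to frozen pre-trained weights, and an optimal transport (Sinkhorn) regularizer between pre-trained and fine-tuned embeddings. The names (`WeightAdaptiveLinear`, `sinkhorn_ot_loss`) and design choices (per-output-channel gating, entropic OT with uniform marginals) are illustrative assumptions, not the paper's method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class WeightAdaptiveLinear(nn.Module):
    """Frozen pre-trained linear layer whose weights are modulated by a tiny
    learnable per-output-channel gate (hypothetical sketch): only the gate is
    trained, which keeps the tuned parameter count very small."""

    def __init__(self, pretrained: nn.Linear):
        super().__init__()
        # Frozen pre-trained weights and bias (not updated during fine-tuning).
        self.register_buffer("weight", pretrained.weight.detach().clone())
        self.register_buffer(
            "bias",
            pretrained.bias.detach().clone() if pretrained.bias is not None else None,
        )
        # Learnable modulation: one scalar per output channel.
        self.gate = nn.Parameter(torch.ones(pretrained.out_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.weight * self.gate.unsqueeze(1)  # scale each frozen weight row
        return F.linear(x, w, self.bias)


def sinkhorn_ot_loss(z_pre: torch.Tensor, z_tuned: torch.Tensor,
                     eps: float = 0.1, n_iters: int = 50) -> torch.Tensor:
    """Entropic-OT (Sinkhorn) distance between pre-trained and fine-tuned
    embeddings, used here as a generic stand-in for an OT-based transfer
    regularizer."""
    cost = torch.cdist(z_pre, z_tuned, p=2) ** 2      # pairwise squared distances
    cost = cost / (cost.max() + 1e-9)                 # normalize for numerical stability
    n, m = cost.shape
    mu = torch.full((n,), 1.0 / n, device=cost.device)  # uniform source marginal
    nu = torch.full((m,), 1.0 / m, device=cost.device)  # uniform target marginal
    K = torch.exp(-cost / eps)
    u = torch.ones_like(mu)
    for _ in range(n_iters):                          # Sinkhorn iterations
        v = nu / (K.t() @ u + 1e-9)
        u = mu / (K @ v + 1e-9)
    plan = u.unsqueeze(1) * K * v.unsqueeze(0)        # approximate transport plan
    return (plan * cost).sum()


# Toy usage: wrap a pre-trained layer and train only the gate (plus a task head).
pretrained_layer = nn.Linear(64, 64)                  # stand-in for a pre-trained GNN weight matrix
adaptive_layer = WeightAdaptiveLinear(pretrained_layer)
x = torch.randn(32, 64)                               # toy node features
loss = sinkhorn_ot_loss(pretrained_layer(x).detach(), adaptive_layer(x))
loss.backward()                                       # gradients reach only the gate
```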
External IDs: dblp:conf/pakdd/SeongC25