GNNUpdater: Adaptive Self-Triggered Training Framework on Dynamic Graphs

18 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: Graph Neural Networks, Update-Timing Strategy, Streaming Graphs
TL;DR: GNNUpdater automatically decides when to fine-tune GNNs on streaming graphs by predicting performance degradation from embedding shifts and global structure, cutting needless updates while preserving accuracy.
Abstract: Adapting Graph Neural Networks (GNNs) to evolving, dynamic graph data presents a significant operational challenge. A critical yet understudied question is determining **when** to update these models to balance model freshness against computational training cost. This problem is particularly difficult in graph settings due to two key issues: **label delay**, where ground truth arrives long after predictions are made, and **hidden drift**, where structural dependencies propagate changes through multiple hops, causing unexpected performance degradation. We propose GNNUpdater, an adaptive framework that decides when to trigger GNN training. It overcomes these challenges through two innovations: (1) a performance predictor that estimates model quality by measuring shifts in node embeddings, eliminating dependence on immediate ground-truth labels, and (2) a graph-aware update trigger that uses label propagation to detect widespread performance degradation across the graph. We implement GNNUpdater as a high-performance distributed streaming-GNN library for billion-edge dynamic graphs. Extensive experiments demonstrate that GNNUpdater either exceeds the performance of periodic, performance-based, and drift-detection baselines at comparable training cost or matches their performance with significantly reduced computational effort. The implementation is available at the anonymous link: https://anonymous.4open.science/r/GNNUpdater-B47D/.
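The abstract's two mechanisms can be illustrated with a minimal sketch. This is not the paper's implementation; it only shows the general idea under simple assumptions: per-node "degradation" is proxied by the cosine distance between consecutive embedding snapshots, and a label-propagation-style smoothing over the adjacency matrix spreads those scores to detect graph-wide drift before a threshold triggers fine-tuning. All function names and the threshold value are hypothetical.

```python
import numpy as np

def embedding_shift(prev_emb: np.ndarray, curr_emb: np.ndarray) -> np.ndarray:
    """Per-node cosine distance between two embedding snapshots (proxy for degradation)."""
    num = (prev_emb * curr_emb).sum(axis=1)
    den = np.linalg.norm(prev_emb, axis=1) * np.linalg.norm(curr_emb, axis=1) + 1e-12
    return 1.0 - num / den

def propagate_scores(scores: np.ndarray, adj: np.ndarray,
                     alpha: float = 0.5, iters: int = 10) -> np.ndarray:
    """Label-propagation-style smoothing: mix each node's score with its
    neighbors' average, so localized drift that spreads structurally is surfaced."""
    deg = adj.sum(axis=1, keepdims=True) + 1e-12
    norm_adj = adj / deg  # row-normalized adjacency
    out = scores.copy()
    for _ in range(iters):
        out = alpha * (norm_adj @ out) + (1 - alpha) * scores
    return out

def should_update(prev_emb: np.ndarray, curr_emb: np.ndarray,
                  adj: np.ndarray, threshold: float = 0.1) -> bool:
    """Trigger fine-tuning when propagated degradation exceeds a threshold
    (hypothetical decision rule; no labels required)."""
    shift = embedding_shift(prev_emb, curr_emb)
    spread = propagate_scores(shift, adj)
    return bool(spread.mean() > threshold)
```

In this toy form, unchanged embeddings yield a shift of zero and no update, while a large embedding rotation pushes the propagated score past the threshold and fires the trigger. The actual framework learns the mapping from embedding shifts to performance rather than using a fixed cosine-distance heuristic.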
Primary Area: learning on graphs and other geometries & topologies
Submission Number: 12297