Reducing Resource Usage for Continuous Model Updating and Predictive Query Answering in Graph Streams
Abstract: We observe the need for continuous, online training of dynamic graph neural network (DGNN) models while simultaneously using them to answer continuous predictive queries as data streams in, which incurs significant training-time and memory costs. Alongside the DGNN model, we learn a weight/priority distribution over the nodes via a randomized online algorithm. The DGNN is then continuously trained by sampling nodes from the learned distribution and performing only the sampled nodes' partitions of the training work. We also devise a novel graph Kernel Density Estimation technique to smooth the distribution and improve the learning quality. Our experiments show that continuous online learning is much needed for graph streams and that our approach significantly improves on standard DGNN models: to reach the same accuracy, training time is shorter by several times up to two orders of magnitude, and maximum memory consumption is several to 20 times smaller.
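The abstract describes the core loop at a high level: maintain a priority distribution over nodes, sample nodes from it to perform partial training work, and smooth the distribution over the graph. The following is a minimal, hypothetical sketch of that loop, not the paper's algorithm; all names (node_weights, smooth_over_graph, train_on_nodes) and constants are illustrative assumptions.

```python
# Illustrative sketch only: per-node priority sampling with a simple
# neighborhood-based smoothing step standing in for graph KDE.
import numpy as np

def smooth_over_graph(weights, adjacency, bandwidth=0.5):
    """Mix each node's weight with its neighbors' mean (KDE-like smoothing)."""
    deg = adjacency.sum(axis=1)
    deg[deg == 0] = 1.0
    neighbor_mean = adjacency @ weights / deg
    smoothed = (1.0 - bandwidth) * weights + bandwidth * neighbor_mean
    return smoothed / smoothed.sum()

def train_on_nodes(nodes):
    """Placeholder for the DGNN update restricted to the sampled nodes."""
    return np.random.rand(len(nodes))  # pretend per-node losses

rng = np.random.default_rng(0)
num_nodes = 100
# Toy undirected graph as a dense 0/1 adjacency matrix.
adjacency = (rng.random((num_nodes, num_nodes)) < 0.05).astype(float)
adjacency = np.maximum(adjacency, adjacency.T)

node_weights = np.full(num_nodes, 1.0 / num_nodes)  # start uniform

for step in range(10):                         # one iteration per stream batch
    sampled = rng.choice(num_nodes, size=16, replace=False, p=node_weights)
    losses = train_on_nodes(sampled)           # do only these nodes' training work
    node_weights[sampled] += 0.1 * losses      # raise priority of hard nodes
    node_weights /= node_weights.sum()
    node_weights = smooth_over_graph(node_weights, adjacency)
```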