Reinforcement Learning Assisted Dynamic Large Scale Graph Learning

Published: 04 Oct 2025, Last Modified: 10 Oct 2025, DiffCoAlg 2025 Poster, License: CC BY 4.0
Keywords: Graph Neural Network, Reinforcement Learning
TL;DR: RL Assisted Dynamic Large Scale GNN Learning
Abstract: Graph Neural Networks (GNNs) have proven highly effective for node and link prediction across domains ranging from social networks to drug discovery. However, processing extremely large graphs with millions of densely connected nodes poses significant challenges in computational efficiency, learning speed, and memory management, making Graph Foundation Models very expensive to train. In this work, we present a reinforcement learning (RL) assisted dynamic graph learning algorithm that addresses these scalability issues, making Graph Foundation Models computationally feasible for many use cases. Our approach offers a new perspective on advanced graph machine learning: an RL agent strategically sparsifies large graphs, preserving only the most salient edges for downstream tasks such as node classification. We demonstrate the effectiveness of our framework on an academic network containing papers, authors, and their affiliations. Our method first partitions the network into two components: a core graph of papers and a satellite graph of authors and affiliations. The RL agent then selectively merges these components by identifying and retaining only the most informative connections between papers and authors for the node classification task. Experimental results show that our approach achieves performance comparable to baseline methods while reducing memory requirements and accelerating learning.
Submission Number: 27
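The sketch below illustrates, in minimal form, the kind of loop the abstract describes: a core graph of papers, candidate core-to-satellite (paper-author) edges, an RL policy that keeps only a budgeted subset of those edges, and a small GCN trained for node classification whose validation accuracy serves as the reward. All names, shapes, the REINFORCE-style policy, and the toy data are illustrative assumptions, not the authors' implementation.

```python
"""Hypothetical sketch of RL-assisted graph sparsification for node classification.
Assumed design: a dense-adjacency two-layer GCN as the downstream model and a
REINFORCE-style policy over candidate paper-author edges. Not the paper's code."""
import torch
import torch.nn as nn
import torch.nn.functional as F


class TwoLayerGCN(nn.Module):
    """Plain dense-adjacency GCN used as the downstream node classifier."""
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim)
        self.w2 = nn.Linear(hid_dim, n_classes)

    def forward(self, x, adj):
        # Symmetric normalization of the self-loop-augmented adjacency.
        a = adj + torch.eye(adj.size(0))
        d = a.sum(dim=1).clamp(min=1.0).pow(-0.5)
        a_norm = d.unsqueeze(1) * a * d.unsqueeze(0)
        h = F.relu(self.w1(a_norm @ x))
        return self.w2(a_norm @ h)


def build_adjacency(n_nodes, core_edges, kept_satellite_edges):
    """Merge the core-graph edges with the RL-selected core-satellite edges."""
    adj = torch.zeros(n_nodes, n_nodes)
    for i, j in core_edges + kept_satellite_edges:
        adj[i, j] = adj[j, i] = 1.0
    return adj


def evaluate(core_edges, kept_edges, x, y, train_mask, val_mask, epochs=30):
    """Reward signal: validation accuracy of a GCN trained on the merged graph."""
    model = TwoLayerGCN(x.size(1), 16, int(y.max()) + 1)
    opt = torch.optim.Adam(model.parameters(), lr=0.01)
    adj = build_adjacency(x.size(0), core_edges, kept_edges)
    for _ in range(epochs):
        opt.zero_grad()
        out = model(x, adj)
        F.cross_entropy(out[train_mask], y[train_mask]).backward()
        opt.step()
    with torch.no_grad():
        pred = model(x, adj)[val_mask].argmax(dim=1)
        return (pred == y[val_mask]).float().mean().item()


# Toy data with assumed shapes; replace with the real paper/author/affiliation graph.
torch.manual_seed(0)
n_core, n_sat, feat_dim, n_classes = 40, 20, 8, 3
n_nodes = n_core + n_sat
x = torch.randn(n_nodes, feat_dim)
y = torch.randint(0, n_classes, (n_nodes,))
train_mask = torch.zeros(n_nodes, dtype=torch.bool); train_mask[:20] = True
val_mask = torch.zeros(n_nodes, dtype=torch.bool); val_mask[20:40] = True
core_edges = [(i, i + 1) for i in range(n_core - 1)]                 # paper-paper
candidate_edges = [(i, n_core + torch.randint(0, n_sat, (1,)).item())
                   for i in range(n_core)]                           # paper-author

# REINFORCE-style edge selector: one learnable logit per candidate edge.
budget = 10                                   # max core-satellite edges to keep
logits = torch.zeros(len(candidate_edges), requires_grad=True)
policy_opt = torch.optim.Adam([logits], lr=0.1)
baseline = 0.0

for step in range(20):
    probs = torch.sigmoid(logits)
    keep = torch.bernoulli(probs.detach())            # sample a sparsification mask
    kept = [e for e, k in zip(candidate_edges, keep) if k > 0][:budget]
    reward = evaluate(core_edges, kept, x, y, train_mask, val_mask)
    baseline = 0.9 * baseline + 0.1 * reward          # moving-average baseline
    log_prob = (keep * torch.log(probs + 1e-8)
                + (1 - keep) * torch.log(1 - probs + 1e-8)).sum()
    loss = -(reward - baseline) * log_prob            # REINFORCE objective
    policy_opt.zero_grad(); loss.backward(); policy_opt.step()
    print(f"step {step:02d}  kept {len(kept):2d} edges  val acc {reward:.2f}")
```

In this sketch the policy is episodic: each step samples a sparsified merge of the core and satellite graphs, trains a fresh classifier, and nudges the edge logits toward selections that improve validation accuracy, so memory scales with the retained edges rather than the full dense connectivity.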