MiNT: Multi-Network Transfer Benchmark for Temporal Graph Learning

Published: 18 Sept 2025 · Last Modified: 30 Oct 2025 · NeurIPS 2025 Datasets and Benchmarks Track (poster) · CC BY 4.0
Keywords: Temporal graph learning, Transfer learning, Graph neural networks, Temporal multi-network training
TL;DR: We introduce the first benchmark of 84 real-world temporal graphs and MiNT, the first multi-network temporal model pre-trained on this collection, demonstrating strong transferability to unseen token networks.
Abstract: Temporal Graph Learning (TGL) aims to discover patterns in evolving networks, or temporal graphs, and leverage these patterns to predict future interactions. However, most existing research focuses on learning from a single network in isolation, leaving the challenges of within-domain and cross-domain generalization largely unaddressed. In this study, we introduce a new benchmark of 84 real-world temporal transaction networks and propose **Temporal Multi-network Transfer (MiNT)**, a pre-training framework designed to capture transferable temporal dynamics across diverse networks. We train MiNT models on up to 64 transaction networks and evaluate their generalization ability on 20 held-out, unseen networks. Our results show that MiNT consistently outperforms individually trained models, revealing a strong relationship between the number of pre-training networks and transfer performance. These findings highlight scaling trends in temporal graph learning and underscore the importance of network diversity in improving generalization. This work establishes the first large-scale benchmark for studying transferability in TGL and lays the groundwork for developing Temporal Graph Foundation Models. Our code is available at https://github.com/benjaminnNgo/ScalingTGNs
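For readers who want a concrete picture of the setup the abstract describes, the sketch below shows one plausible shape of multi-network pre-training: a single link-prediction model is optimized over mini-batches drawn from many temporal networks rather than one. This is a minimal, hypothetical illustration, not the MiNT implementation (see the Code URL below); names such as `SnapshotEncoder` and the synthetic data are assumptions for the example.

```python
# Hypothetical sketch of multi-network pre-training for temporal link
# prediction: one shared model, trained round-robin across networks.
# `SnapshotEncoder` is a toy stand-in for a temporal GNN, not MiNT itself.
import torch
import torch.nn as nn

class SnapshotEncoder(nn.Module):
    """Toy encoder that scores candidate edges from node features."""
    def __init__(self, dim=32):
        super().__init__()
        self.node_emb = nn.Linear(dim, dim)
        self.scorer = nn.Bilinear(dim, dim, 1)

    def forward(self, feats, src, dst):
        h = torch.relu(self.node_emb(feats))
        return self.scorer(h[src], h[dst]).squeeze(-1)

def pretrain(networks, epochs=5, dim=32):
    """Train one model over all networks; each network yields
    (node features, source ids, destination ids, edge labels)."""
    model = SnapshotEncoder(dim)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        for feats, src, dst, labels in networks:  # round-robin over networks
            opt.zero_grad()
            loss = loss_fn(model(feats, src, dst), labels)
            loss.backward()
            opt.step()
    return model

# Synthetic example: 4 "networks", each a (features, src, dst, labels) tuple.
nets = []
for _ in range(4):
    feats = torch.randn(100, 32)
    src, dst = torch.randint(0, 100, (2, 64))
    labels = torch.randint(0, 2, (64,)).float()
    nets.append((feats, src, dst, labels))
model = pretrain(nets)  # would then be evaluated on held-out networks
```

The evaluation side of the benchmark (transfer to the 20 held-out networks) would amount to running the pre-trained model on graphs never seen during training, without per-network fine-tuning.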
Croissant File: json
Dataset URL: https://zenodo.org/records/15364297
Code URL: https://github.com/benjaminnNgo/ScalingTGNs
Supplementary Material: pdf
Primary Area: Datasets & Benchmarks illustrating Different Deep learning Scenarios (e.g., architectures, generative models, optimization for deep networks, foundation models, LLMs)
Submission Number: 808