Adversarial Robustness of Continuous Time Dynamic Graphs

ICLR 2026 Conference Submission 13055 Authors

18 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Adversarial Attacks, Temporal Graph Learning, Graph Neural Network
TL;DR: We introduce Temporally-aware Randomized Block Coordinate Descent (TR-BCD), a novel gradient-based evasion attack framework for continuous-time dynamic graphs to test TGNN vulnerabilities.
Abstract: Real-world relations are dynamic and often modeled as temporal graphs, making Temporal Graph Neural Networks (TGNNs) crucial for applications like fraud detection, cybersecurity, and social network analysis. However, our study reveals critical vulnerabilities in these models through three types of adversarial attacks: structural, contextual, and temporal perturbations. We introduce Temporally-aware Randomized Block Coordinate Descent (TR-BCD), a novel gradient-based evasion attack framework for continuous-time dynamic graphs. Unlike previous approaches that rely on heuristics or require training data access, TR-BCD optimizes adversarial edge selection through continuous relaxation while maintaining realistic temporal patterns. Through extensive experiments on six temporal networks, we demonstrate that TGNNs are highly vulnerable to TR-BCD attacks, reducing Mean Reciprocal Rank (MRR) by up to 53% while perturbing only 5% of edges. Our attacks are highly effective against state-of-the-art models, including TGN and TNCN, highlighting the importance of studying adversarial robustness for temporal graph learning methods.
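The abstract describes TR-BCD as optimizing adversarial edge selection via continuous relaxation with randomized block coordinate updates. The sketch below illustrates that general idea only: it maintains a relaxed perturbation weight per candidate edge, ascends the attack objective on a random coordinate block per step, and finally discretizes to a fixed edge budget. All names (`tr_bcd_sketch`, the surrogate gradient `loss_grad`, the block size and step count) are hypothetical illustrations, not the authors' implementation, and no TGNN, temporal constraint, or real attack loss is modeled here.

```python
import numpy as np

rng = np.random.default_rng(0)

def tr_bcd_sketch(loss_grad, n_edges, budget, n_steps=100, block_size=16, lr=0.1):
    """Illustrative randomized block coordinate ascent over continuous
    edge-perturbation weights w in [0, 1] (a sketch, not the paper's code).

    loss_grad(w): gradient of the attack objective w.r.t. the relaxed weights.
    Returns the indices of the `budget` edges chosen for perturbation.
    """
    w = np.zeros(n_edges)  # continuous relaxation of the binary flip mask
    for _ in range(n_steps):
        # pick a random block of coordinates to update this step
        block = rng.choice(n_edges, size=block_size, replace=False)
        g = loss_grad(w)
        w[block] += lr * g[block]      # ascend the attack loss on the block only
        np.clip(w, 0.0, 1.0, out=w)    # project back into the relaxed domain
    # discretize: keep the `budget` edges with the largest relaxed weights
    return np.argsort(w)[-budget:]

# Toy surrogate: the attack loss grows fastest for edges with large scores s,
# so its gradient is simply s (constant in w).
s = rng.random(200)
flipped = tr_bcd_sketch(lambda w: s, n_edges=200, budget=10)
```

In a real attack the surrogate gradient would come from backpropagating a link-prediction loss through the victim TGNN, and the discretization step would also have to respect the temporal-realism constraints the abstract mentions.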
Supplementary Material: zip
Primary Area: learning on graphs and other geometries & topologies
Submission Number: 13055