MEMFREEZING: TOWARDS PRACTICAL ADVERSARIAL ATTACKS ON TEMPORAL GRAPH NEURAL NETWORKS

27 Sept 2024 (modified: 05 Feb 2025) · Submitted to ICLR 2025 · CC BY 4.0
Keywords: Graph Neural Networks, Dynamic Graph, Adversarial Attack, Temporal Graph Neural Network
TL;DR: This paper introduces a practical adversarial attack on memory-based temporal graph neural networks.
Abstract: Temporal graph neural networks (TGNNs) have gained significant momentum in many real-world dynamic graph tasks, making it urgent to study their robustness against adversarial attacks in real-world scenarios. Existing TGNN adversarial attacks assume that attackers have complete knowledge of the input graphs. This is unrealistic in practice, where attackers can, at best, access information about existing nodes and edges but not future ones at the time of the attack. Mounting effective attacks with only up-to-attack knowledge is particularly challenging due to the dynamic nature of TGNN input graphs. On the one hand, graph changes after the attack may diminish its impact on the affected nodes. On the other hand, targeting nodes that are unseen at attack time introduces significant challenges. To address these challenges, we introduce MemFreezing, a novel adversarial attack framework that yields long-lasting and spreading attacks on TGNNs without requiring any knowledge of post-attack changes in the dynamic graph. MemFreezing strategically injects fake nodes or edges to drive node memories into similar, stable states, which we call 'frozen states.' In these states, nodes can no longer sense graph changes or carry information, thereby disrupting predictions. In subsequent updates, the affected nodes maintain and propagate their frozen states with support from their neighboring nodes. Experimental results demonstrate that MemFreezing persistently degrades TGNN performance across various tasks, delivering more effective attacks under practical setups.
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 11270