MemFreezing: A Novel Adversarial Attack on Temporal Graph Neural Networks under Limited Future Knowledge
TL;DR: We introduce a novel adversarial attack on temporal graph neural networks that enables attackers to mount attacks without knowing how the dynamic graph will change after the attack.
Abstract: Temporal graph neural networks (TGNNs) have gained significant momentum in many real-world dynamic graph tasks.
While most existing TGNN attack methods assume worst-case scenarios in which attackers have complete knowledge of the input graph, this assumption may not hold in real-world situations, where attackers can, at best, access information about existing nodes and edges but not those added after the attack.
Studying adversarial attacks under these constraints is nonetheless crucial, as limited future knowledge can reveal TGNN vulnerabilities that are overlooked in idealized settings.
However, designing effective attacks in such scenarios is challenging: as the graph evolves, it can dilute an attack's impact and make unseen future nodes hard to affect.
To address these challenges, we introduce MemFreezing, a novel adversarial attack framework that delivers long-lasting and spreading disruptions in TGNNs without requiring post-attack knowledge of the graph.
MemFreezing strategically injects fake nodes or edges to push node memories into a stable “frozen state,” reducing their responsiveness to subsequent graph changes and limiting their ability to convey meaningful information.
As the graph evolves, the affected nodes maintain their frozen state and propagate it to their neighbors.
Experimental results show that MemFreezing persistently degrades TGNN performance across various tasks, offering a more enduring adversarial strategy under limited future knowledge.
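To make the "frozen state" intuition concrete, here is a minimal, hypothetical sketch (not the paper's actual attack) using a GRU-based node memory of the kind common in TGNNs. Replaying one fake message drives the memory toward a fixed point of that message's update, after which the memory's per-step drift shrinks; the message construction, dimensions, and module names are illustrative assumptions.

```python
# Toy illustration only: a TGN-style per-node memory updated by a GRU cell.
# Repeatedly replaying a single crafted message pushes the memory toward a
# fixed point h* = GRU(m_adv, h*); the printed drift shows how much the
# memory still moves at each step (it typically shrinks toward zero with
# default initializations, which is the intuition behind "freezing").
import torch

torch.manual_seed(0)
MSG_DIM, MEM_DIM = 16, 32                              # illustrative sizes
memory_updater = torch.nn.GRUCell(MSG_DIM, MEM_DIM)    # stand-in memory module

def update(memory, message):
    """One memory update for a single node (unsqueeze/squeeze for batch dim)."""
    return memory_updater(message.unsqueeze(0), memory.unsqueeze(0)).squeeze(0)

memory = torch.zeros(MEM_DIM)                 # node memory before the attack
adversarial_msg = 3.0 * torch.ones(MSG_DIM)   # one fake message, replayed via injected edges

# Replay the fake message and track how much the memory still changes per step.
for step in range(20):
    new_memory = update(memory, adversarial_msg)
    drift = torch.norm(new_memory - memory).item()
    memory = new_memory
    if step % 5 == 0:
        print(f"step {step:2d}: memory drift = {drift:.4f}")

# Compare how much a "frozen" memory vs. a fresh memory moves under the same
# subsequent legitimate messages; a memory stuck near a fixed point that moves
# little here is effectively ignoring post-attack events.
frozen_memory, fresh_memory = memory.clone(), torch.zeros(MEM_DIM)
for _ in range(5):
    legit_msg = torch.randn(MSG_DIM)
    frozen_memory = update(frozen_memory, legit_msg)
    fresh_memory = update(fresh_memory, legit_msg)
print("movement of frozen memory:", torch.norm(frozen_memory - memory).item())
print("movement of fresh memory :", torch.norm(fresh_memory).item())
```

This sketch only mimics the abstract's intuition; the paper's method additionally chooses where to inject fake nodes or edges so that the frozen state persists and spreads as the graph evolves.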
Lay Summary: AI models such as TGNNs are highly accurate on tasks involving dynamic graphs (e.g., social networks, traffic prediction), but their reliability can be compromised by adversarial attacks, i.e., malicious manipulations of the input data. Unlike attacks on static graphs, attacking dynamic graphs is harder because attackers have limited knowledge of future changes.
We discovered a new threat: attackers can mislead TGNNs by preventing them from detecting changes in dynamic graphs. We show that, by injecting small amounts of fake data, an attacker can trick TGNNs into ignoring real updates, leading to incorrect predictions.
Our work highlights a critical vulnerability in TGNNs, emphasizing the need for defenses that ensure these models remain sensitive to real-time changes in dynamic graphs. This is key for maintaining trustworthy AI in real-world applications.
Primary Area: Social Aspects->Security
Keywords: Adversarial Attack, Graph Neural Network, Dynamic Graph
Submission Number: 7369