Imperceptible Adversarial Attacks on Discrete-Time Dynamic Graph Models

24 Sept 2022 (modified: 05 May 2023) · TGL@NeurIPS2022 Long Paper · Readers: Everyone
Keywords: Graph Neural Networks, Dynamic Graphs, Adversarial Attacks, Imperceptibility
TL;DR: Introduces a novel constraint for imperceptibly attacking dynamic graph models and presents an effective approach to finding perturbations under that constraint.
Abstract: Real-world graphs such as social networks, communication networks, and rating networks are constantly evolving over time. Many architectures have been developed to learn effective node representations using both graph structure and its dynamics. While the robustness of static graph models is well-studied, the vulnerability of dynamic graph models to adversarial attacks is underexplored. In this work, we design a novel attack on discrete-time dynamic graph models that perturbs the input graph sequence in a manner that preserves the structural evolution of the graph. To this end, we motivate a novel Time-Aware Perturbation (TAP) constraint, which ensures that the perturbations introduced at each time step are restricted to only a small fraction of the number of changes in the graph since the previous time step. We present Dyn-PGD, a theoretically grounded Projected Gradient Descent approach for dynamic graphs that finds effective perturbations under the TAP constraint. Experiments on dynamic link prediction show that our approach is up to 4x more effective than the baseline methods at attacking these models under the novel constraint. Dyn-PGD also outperforms the existing baselines on the node classification task by up to 6x and is 2x more efficient in running time than the best-performing baseline.
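The TAP constraint described in the abstract can be illustrated with a minimal sketch: for each time step, the attacker's edge-perturbation budget is a fraction of how many edges actually changed since the previous snapshot. The function name `tap_budgets` and the fraction parameter `eps` are hypothetical illustrations, not the paper's implementation.

```python
import numpy as np

def tap_budgets(adj_seq, eps=0.1):
    """For each step t >= 1, cap the number of perturbed edges at
    eps times the number of edge changes between snapshots t-1 and t
    (the symmetric difference of the two edge sets)."""
    budgets = []
    for prev, curr in zip(adj_seq, adj_seq[1:]):
        # undirected graphs: each changed edge flips two matrix entries
        n_changes = int(np.sum(prev != curr)) // 2
        budgets.append(int(eps * n_changes))
    return budgets

# Toy sequence of three snapshots on four nodes.
A0 = np.zeros((4, 4), dtype=int)
A1 = A0.copy(); A1[0, 1] = A1[1, 0] = 1; A1[2, 3] = A1[3, 2] = 1
A2 = A1.copy(); A2[0, 2] = A2[2, 0] = 1
print(tap_budgets([A0, A1, A2], eps=0.5))  # → [1, 0]
```

Under this reading, a step where the graph barely changed admits almost no perturbation, which is what keeps the attack imperceptible relative to the graph's natural evolution.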
Paper Format: full paper (8 pages)