Abstract: While revolutionizing social networks, recommendation systems, and other online web services, graph neural networks remain vulnerable to adversarial attacks. Recent state-of-the-art attacks rely on gradient-based meta-learning to iteratively perturb the single edge with the highest attack score until the budget constraint is exhausted. While effective at identifying vulnerable links, these methods incur high computational costs. By leveraging continuous relaxation and parameterization of the graph structure, we propose a novel attack method, DGA, that efficiently generates effective attacks while eliminating the need for costly retraining. Compared to the state of the art, DGA achieves nearly equivalent attack performance with 6 times less training time and an 11 times smaller GPU memory footprint across different benchmark datasets. Additionally, we provide extensive experimental analyses of the transferability of DGA among different graph models, as well as its robustness against widely used defense mechanisms.
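The core idea named in the abstract, continuous relaxation of discrete edge flips followed by budget-constrained discretization, can be illustrated with a minimal sketch. This is not the paper's actual algorithm; the function name, the sigmoid relaxation, and the top-k projection are all illustrative assumptions about how such a relaxation is typically realized.

```python
import numpy as np

def relax_and_select(theta, budget):
    """Hedged sketch: map real-valued flip parameters to a discrete attack.

    Each candidate edge flip gets a real parameter theta[i, j]; sigmoid(theta)
    is a soft, differentiable "flip score" that a gradient-based optimizer
    could update. After optimization, the `budget` highest-scoring flips are
    discretized into a binary perturbation mask.
    """
    probs = 1.0 / (1.0 + np.exp(-theta))   # continuous relaxation of {0, 1} flips
    flat = probs.flatten()
    top = np.argsort(flat)[-budget:]       # keep only the top-`budget` flips
    mask = np.zeros_like(flat)
    mask[top] = 1.0
    return mask.reshape(theta.shape)

# Toy parameters over a 2x2 block of candidate flips, budget of 2 edges.
theta = np.array([[0.5, -1.0], [2.0, 0.1]])
attack = relax_and_select(theta, budget=2)
print(attack)  # the two entries with the largest sigmoid scores are selected
```

Because the relaxation is differentiable, all candidate flips can be scored in a single backward pass, which is the kind of property that avoids the per-edge retraining loop of meta-learning attacks.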