Graph Robustness Benchmark: Rethinking and Benchmarking Adversarial Robustness of Graph Neural Networks

08 Jun 2021 (modified: 24 May 2023) · Submitted to NeurIPS 2021 Datasets and Benchmarks Track (Round 1)
Keywords: adversarial robustness, attack and defense, adversarial learning, graph neural networks
TL;DR: A benchmark for better evaluation of the adversarial robustness of graph neural networks
Abstract: Recent studies have shown that Graph Neural Networks (GNNs) are vulnerable to adversarial attacks. However, previous attacks and defenses on GNNs share common problems, such as limited scalability or generality, which hinder the progress of this domain. By rethinking the limitations of previous works, we propose the Graph Robustness Benchmark (GRB), the first benchmark that aims to provide a scalable, general, unified, and reproducible evaluation of the adversarial robustness of GNNs. GRB includes (1) scalable datasets processed by a novel splitting scheme; (2) a diverse set of baseline methods covering GNNs, attacks, and defenses; (3) a unified evaluation pipeline that permits fair comparison; (4) a modular coding framework that facilitates the implementation of various methods and ensures reproducibility; (5) leaderboards that track the progress of the field. In addition, we propose two strong baseline defenses that significantly outperform previous ones. Through extensive experiments, we fairly compare all methods and investigate their pros and cons. GRB is open-source and maintains all datasets, code, and leaderboards at https://cogdl.ai/grb/home, which will be continuously updated to promote future research in this field.
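
For illustration only, below is a minimal Python/PyTorch sketch of the kind of unified attack-versus-defense evaluation pipeline the abstract describes: every attack is run against every defended model under the same protocol, and accuracy on the perturbed graph is reported next to clean accuracy. This is not the actual GRB API; the function names, the model call signature model(features, adj), and the attack callables are hypothetical placeholders (see the GitHub repository below for the real interface).

import torch

def evaluate(model, features, adj, labels, test_mask):
    # Accuracy of a node-classification model on a (possibly perturbed) graph.
    model.eval()
    with torch.no_grad():
        logits = model(features, adj)          # hypothetical GNN forward signature
        pred = logits[test_mask].argmax(dim=-1)
        return (pred == labels[test_mask]).float().mean().item()

def run_benchmark(models, attacks, features, adj, labels, test_mask):
    # Cross-product of defenses x attacks under one shared evaluation protocol.
    results = {}
    for model_name, model in models.items():
        # Clean (unattacked) performance as a reference point.
        results[(model_name, "clean")] = evaluate(
            model, features, adj, labels, test_mask)
        for attack_name, attack in attacks.items():
            # Each attack is a callable returning a perturbed graph under a fixed budget.
            adv_features, adv_adj = attack(model, features, adj, labels)
            results[(model_name, attack_name)] = evaluate(
                model, adv_features, adv_adj, labels, test_mask)
    return results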
Supplementary Material: zip
URL: Website: https://cogdl.ai/grb/home ; Code: https://github.com/THUDM/grb