Graph Robustness Benchmark: Benchmarking the Adversarial Robustness of Graph Machine Learning

Published: 11 Oct 2021, Last Modified: 23 May 2023
NeurIPS 2021 Datasets and Benchmarks Track (Round 2)
Keywords: adversarial robustness, graph machine learning, benchmark
TL;DR: A benchmark for better evaluation of the adversarial robustness of graph machine learning models
Abstract: Adversarial attacks on graphs pose a major threat to the robustness of graph machine learning (GML) models, fueling an ever-escalating arms race between attackers and defenders. However, the strategies on both sides are often not compared fairly under the same, realistic conditions. To bridge this gap, we present the Graph Robustness Benchmark (GRB), which aims to provide a scalable, unified, modular, and reproducible evaluation of the adversarial robustness of GML models. GRB standardizes the process of attacks and defenses by 1) developing scalable and diverse datasets, 2) modularizing attack and defense implementations, and 3) unifying the evaluation protocol across refined scenarios. By leveraging the modular GRB pipeline, end users can focus on developing robust GML models, with data processing and experimental evaluation automated. To support open and reproducible research on graph adversarial learning, GRB also hosts public leaderboards for the different scenarios. As a starting point, we provide baseline experiments that benchmark state-of-the-art techniques. GRB is open source; all datasets, code, and leaderboards are available at https://cogdl.ai/grb/home.
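The modular pipeline described in the abstract (dataset, model, attack, evaluation) maps onto a short script. Below is a minimal sketch of that workflow, adapted from the repository's documented usage; the module paths, class names, and arguments shown here (e.g., Dataset, GCN, Trainer, TDGIA) are assumptions based on the public code at https://github.com/THUDM/grb and may differ across GRB versions.

```python
# Minimal sketch of the GRB pipeline: dataset -> model -> training -> attack.
# Module paths and signatures follow the repository's documented examples and
# are assumptions here; consult https://github.com/THUDM/grb for the exact API.
import torch
import torch.nn.functional as F

from grb.dataset import Dataset           # assumed: standardized GRB datasets
from grb.model.torch import GCN           # assumed: reference GCN implementation
from grb.trainer.trainer import Trainer   # assumed: automated training loop
from grb.attack.injection import TDGIA    # assumed: node-injection attack baseline

# 1) Scalable, preprocessed dataset with fixed splits and feature normalization.
dataset = Dataset(name="grb-cora", mode="easy", feat_norm="arctan")

# 2) A baseline GML model sized to the dataset.
model = GCN(
    in_features=dataset.num_features,
    out_features=dataset.num_classes,
    hidden_features=[64, 64],
)

# 3) Automated training under the unified evaluation protocol.
trainer = Trainer(
    dataset=dataset,
    optimizer=torch.optim.Adam(model.parameters(), lr=0.01),
    loss=F.nll_loss,
)
trainer.train(model=model, n_epoch=200, eval_every=10)

# 4) Injection attack against the trained model's test targets; the perturbed
#    graph is then used to measure robust accuracy.
attack = TDGIA(lr=0.01, n_epoch=10, n_inject_max=20, n_edge_max=20)
adj_attack, features_attack = attack.attack(
    model=model,
    adj=dataset.adj,
    features=dataset.features,
    target_mask=dataset.test_mask,
)
```

Repeating this loop over datasets, attacks, and defenses is the kind of experiment the public leaderboards at https://cogdl.ai/grb/home aggregate.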
Homepage: https://cogdl.ai/grb/home
Code: https://github.com/THUDM/grb
Supplementary Material: pdf
Contribution Process Agreement: Yes
Dataset Url: https://cogdl.ai/grb/home
License: MIT License
Author Statement: Yes