Keywords: Graph Transformer, GNN, adversarial robustness, adversarial attack, adaptive attack, graph positional encodings, continuous relaxation
TL;DR: We propose continuous relaxations for graph transformer models that enable the application of gradient-based graph structure attacks.
Abstract: Existing studies have shown that Graph Neural Networks (GNNs) are vulnerable to adversarial attacks.
Even though Graph Transformers (GTs) have surpassed Message-Passing GNNs on several benchmarks, their adversarial robustness properties remain unexplored. Attacking GTs is challenging due to their Positional Encodings (PEs) and special attention mechanisms, which can be difficult to differentiate through.
We overcome these challenges by targeting three representative architectures -- based on (1) random-walk PEs, (2) pairwise-shortest-path PEs, and (3) spectral PEs -- and propose the first adaptive attacks for GTs.
We leverage our attacks to evaluate robustness to (a) structure perturbations on node classification; and (b) node injection attacks for (fake-news) graph classification.
Our evaluation reveals that GTs can be catastrophically fragile, which underlines the importance of our work and the necessity of adaptive attacks.
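To illustrate the core idea of a continuous relaxation enabling gradient-based structure attacks, below is a minimal sketch (not the paper's implementation) of differentiable random-walk PEs: discrete edge indicators are relaxed to continuous weights in [0, 1], so gradients of the encodings flow back to the perturbation variables. The function name `random_walk_pe` and all parameters are illustrative assumptions.

```python
import torch

def random_walk_pe(A: torch.Tensor, k: int = 4) -> torch.Tensor:
    """Random-walk PEs from a relaxed (continuous) adjacency matrix A.

    Illustrative sketch: entries of A lie in [0, 1] instead of {0, 1},
    so every step below is differentiable w.r.t. the edge weights.
    """
    # Row-normalize A into a random-walk transition matrix; the clamp
    # avoids division by zero for isolated nodes.
    deg = A.sum(dim=1, keepdim=True).clamp(min=1e-8)
    P = A / deg
    # Stack the diagonals of P, P^2, ..., P^k as per-node encodings.
    Pk, diags = P, []
    for _ in range(k):
        diags.append(torch.diagonal(Pk))
        Pk = Pk @ P
    return torch.stack(diags, dim=1)  # shape (n, k)

# Relaxed adjacency with a fractional (perturbed) edge of weight 0.5.
A = torch.tensor([[0.0, 1.0, 0.5],
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.0]], requires_grad=True)
pe = random_walk_pe(A)
pe.sum().backward()  # gradients reach the continuous edge variables
```

An attacker can then follow these gradients to decide which edge flips most change the model's output, before projecting the continuous perturbation back to a discrete one.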
Submission Number: 17