Adversarial Attacks on Node Embeddings

27 Sept 2018 (modified: 22 Oct 2023) · ICLR 2019 Conference Blind Submission
Abstract: The goal of network representation learning is to learn low-dimensional node embeddings that capture the graph structure and are useful for solving downstream tasks. However, despite the proliferation of such methods, there is currently no study of their robustness to adversarial attacks. We provide the first adversarial vulnerability analysis of the widely used family of methods based on random walks. We derive efficient adversarial perturbations that poison the network structure and have a negative effect on both the quality of the embeddings and the downstream tasks. We further show that our attacks are transferable, since they generalize to many models, and are successful even when the attacker is restricted.
Keywords: node embeddings, adversarial attacks
TL;DR: Adversarial attacks on unsupervised node embeddings based on eigenvalue perturbation theory.
Data: [Citeseer](https://paperswithcode.com/dataset/citeseer), [Cora](https://paperswithcode.com/dataset/cora)
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:1809.01093/code)
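
The abstract and TL;DR describe poisoning attacks on random-walk node embeddings derived via eigenvalue perturbation theory. The sketch below is a minimal illustration, not the authors' algorithm: it shows how a first-order eigenvalue perturbation estimate can rank candidate edge flips by their impact on the adjacency spectrum, which random-walk embeddings such as DeepWalk implicitly factorize. The function name `score_edge_flips`, the dense-adjacency setup, and the simple scoring heuristic are assumptions made for illustration.

```python
import numpy as np

def score_edge_flips(adj, candidate_flips, top_k=32):
    """Rank candidate edge flips by a first-order estimate of how much
    they shift the leading eigenvalues of the symmetric adjacency matrix.

    adj:             dense symmetric 0/1 adjacency matrix, shape (n, n)
    candidate_flips: iterable of (i, j) node pairs to add or remove
    top_k:           number of leading eigenpairs used for the estimate
    """
    # Eigendecomposition of the clean graph (symmetric, so eigh applies).
    eigvals, eigvecs = np.linalg.eigh(adj)
    # Keep the top_k eigenpairs by magnitude.
    order = np.argsort(-np.abs(eigvals))[:top_k]
    eigvecs = eigvecs[:, order]

    scores = []
    for i, j in candidate_flips:
        # +1 if the edge would be added, -1 if an existing edge is removed.
        delta = 1.0 - 2.0 * adj[i, j]
        # First-order change of eigenvalue lambda_k under the symmetric
        # perturbation at (i, j): d_lambda_k ~= 2 * delta * u_k[i] * u_k[j].
        d_lambda = 2.0 * delta * eigvecs[i, :] * eigvecs[j, :]
        # Use the total estimated spectral shift as a crude attack-impact score.
        scores.append(np.sum(np.abs(d_lambda)))
    return np.array(scores)


if __name__ == "__main__":
    # Toy random graph standing in for a real dataset such as Cora/Citeseer.
    rng = np.random.default_rng(0)
    n = 100
    adj = (rng.random((n, n)) < 0.05).astype(float)
    adj = np.triu(adj, 1)
    adj = adj + adj.T  # symmetric, zero diagonal

    candidates = [(i, j) for i in range(n) for j in range(i + 1, n)]
    scores = score_edge_flips(adj, candidates)
    budget = 10  # attacker's budget of edge flips
    best = np.argsort(-scores)[:budget]
    print("highest-impact flips:", [candidates[k] for k in best])
```

In this toy setup, the attacker greedily selects the `budget` flips with the largest estimated spectral shift; the paper's actual attack objective and restricted-attacker settings go beyond this heuristic.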