Graph Adversarial Networks: Protecting Information against Adversarial Attacks

Sep 28, 2020 (edited Mar 05, 2021) · ICLR 2021 Conference Withdrawn Submission · Readers: Everyone
  • Reviewed Version (pdf): https://openreview.net/references/pdf?id=N_bIBus1pT
  • Keywords: graph neural networks, deep learning, adversarial learning, theory
  • Abstract: We study the problem of protecting information when learning with graph-structured data. While the advent of Graph Neural Networks (GNNs) has greatly improved node and graph representation learning in many applications, the neighborhood aggregation paradigm exposes additional vulnerabilities to attackers seeking to extract node-level information about sensitive attributes. To counter this, we propose a minimax game between the desired GNN encoder and a worst-case attacker. The resulting adversarial training creates a strong defense against inference attacks while incurring only a small loss in task performance. We analyze the effectiveness of our framework against a worst-case adversary and characterize the trade-off between predictive accuracy and adversarial defense. Experiments across multiple datasets from recommender systems, knowledge graphs, and quantum chemistry demonstrate that the proposed approach provides a robust defense across various graph structures and tasks while producing competitive GNN encoders.
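  The minimax game described in the abstract can be illustrated with a short sketch. Below is a minimal PyTorch example of such alternating adversarial training: an attacker network tries to infer a sensitive node attribute from the GNN embeddings, while the encoder learns to solve its task and to defeat that attacker. The mean-aggregation encoder, the linear heads, the trade-off weight `lam`, and the alternating schedule are all illustrative assumptions, not the paper's exact method.

```python
# Hypothetical sketch of minimax adversarial training for a GNN encoder;
# the components below are assumptions, not the submission's actual code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GNNEncoder(nn.Module):
    """One round of mean neighborhood aggregation followed by a linear map."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, hid_dim)

    def forward(self, x, adj):
        # adj: row-normalized adjacency (N x N); x: node features (N x in_dim)
        return F.relu(self.lin(adj @ x))

# Toy data: a random sparse graph with self-loops, plus random labels.
N, in_dim, hid_dim = 100, 16, 32
x = torch.randn(N, in_dim)
A = (torch.rand(N, N) < 0.05).float()
A = torch.clamp(A + A.T + torch.eye(N), max=1.0)  # symmetric, self-loops
adj = A / A.sum(1, keepdim=True)                  # row-normalize (mean agg.)
y_task = torch.randint(0, 2, (N,))                # desired task labels
y_sens = torch.randint(0, 2, (N,))                # sensitive attribute

encoder = GNNEncoder(in_dim, hid_dim)
task_head = nn.Linear(hid_dim, 2)                 # desired task predictor
attacker = nn.Linear(hid_dim, 2)                  # inference attacker

opt_enc = torch.optim.Adam(
    list(encoder.parameters()) + list(task_head.parameters()), lr=1e-3)
opt_atk = torch.optim.Adam(attacker.parameters(), lr=1e-3)
lam = 1.0                                         # accuracy/defense trade-off

for step in range(200):
    # (1) Attacker step: best-respond to the current (frozen) embeddings.
    z = encoder(x, adj).detach()
    atk_loss = F.cross_entropy(attacker(z), y_sens)
    opt_atk.zero_grad(); atk_loss.backward(); opt_atk.step()

    # (2) Encoder step: minimize task loss while maximizing the attacker's
    # inference loss, i.e. the minimax objective.
    z = encoder(x, adj)
    task_loss = F.cross_entropy(task_head(z), y_task)
    adv_loss = F.cross_entropy(attacker(z), y_sens)
    enc_loss = task_loss - lam * adv_loss
    opt_enc.zero_grad(); enc_loss.backward(); opt_enc.step()
```

  Raising `lam` trades task accuracy for stronger defense, which is the trade-off the abstract says the paper characterizes.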
  • Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
  • Supplementary Material: zip