Towards More Effective and Transferable Poisoning Attacks against Link Prediction on Graphs

Published: 01 Jan 2024, Last Modified: 08 Aug 2024, CSCWD 2024, CC BY-SA 4.0
Abstract: While graph representation learning models have achieved impressive performance on tasks such as link prediction, their vulnerability to imperceptible adversarial perturbations has also come to light. However, existing adversarial attacks struggle to balance effectiveness and transferability: it is difficult to craft, in one shot, perturbations that remain effective across diverse target models and under both availability and integrity attack settings. This study explores a novel way to improve attack performance across various models in both settings. To this end, we develop a Scoring & Update (SU) framework for performing adversarial attacks against link prediction on graphs. Specifically, we iteratively score candidate perturbations with a parameter-frozen, transferable surrogate model and then update the perturbation using the scoring feedback, thereby learning a more effective and transferable adversarial perturbation. Extensive experiments on two real-world datasets show that our attack is more effective than six attack baselines against six popular graph representation target models under both availability and integrity attack settings. Code is available at https://github.com/anonymousaccept/STAA.
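To make the scoring-and-update idea concrete, the sketch below shows a minimal greedy loop: each iteration scores every candidate edge flip under a frozen surrogate and commits the flip that most degrades the surrogate's link scores on a set of target edges. This is an illustrative assumption, not the paper's SU algorithm; the common-neighbour surrogate, the function names, and the exhaustive greedy search are all simplifications for exposition.

```python
# Minimal sketch of a scoring-and-update poisoning loop against link
# prediction. Hypothetical toy code; it does NOT reproduce the paper's method.
import numpy as np

def surrogate_score(adj, u, v):
    """Assumed frozen surrogate: common-neighbour score for edge (u, v)."""
    return float(adj[u] @ adj[v])

def scoring_update_attack(adj, target_edges, budget):
    """Greedily flip the edge whose flip most lowers the surrogate's
    total score on the target edges (an integrity-style attack)."""
    adj = adj.copy()
    n = adj.shape[0]
    targets = {tuple(sorted(e)) for e in target_edges}
    for _ in range(budget):
        best_flip, best_drop = None, 0.0
        base = sum(surrogate_score(adj, u, v) for u, v in target_edges)
        # Scoring step: evaluate every candidate flip under the frozen surrogate.
        for i in range(n):
            for j in range(i + 1, n):
                if (i, j) in targets:       # don't touch the target edges themselves
                    continue
                adj[i, j] = adj[j, i] = 1 - adj[i, j]          # tentative flip
                drop = base - sum(surrogate_score(adj, u, v)
                                  for u, v in target_edges)
                adj[i, j] = adj[j, i] = 1 - adj[i, j]          # undo
                if drop > best_drop:
                    best_flip, best_drop = (i, j), drop
        if best_flip is None:               # no flip helps any more; stop early
            break
        # Update step: commit the highest-scoring perturbation.
        i, j = best_flip
        adj[i, j] = adj[j, i] = 1 - adj[i, j]
    return adj

# Toy usage: poison a 6-node graph to hurt prediction of edge (0, 1).
rng = np.random.default_rng(0)
A = (rng.random((6, 6)) < 0.4).astype(int)
A = np.triu(A, 1)
A = A + A.T
poisoned = scoring_update_attack(A, target_edges=[(0, 1)], budget=2)
```

In the paper's framing, the scoring feedback would instead come from a transferable surrogate model with frozen parameters, so that perturbations chosen against it carry over to unseen target models in one shot.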