Abstract: Graph Neural Networks (GNNs) have achieved remarkable success across various domains, yet recent studies have exposed their vulnerability to backdoor attacks. Backdoor attacks inject triggers into the training set to poison the model, with adversaries typically relabeling trigger-attached training samples as a target label. This leads a GNN trained on the poisoned dataset to misclassify any test sample containing the backdoor trigger as the target label. However, relabeling not only increases the cost of the attack but also raises the risk of detection. Therefore, our study focuses on clean-label backdoor attacks, which do not require modifying the labels of trigger-attached samples in the training phase. Specifically, we employ a novel method to select effective poisoned samples belonging to the target class. An adaptive trigger generator is further deployed to achieve high attack success rates under a small backdoor budget. Our experiments on four public datasets validate the effectiveness of our proposed attack.