FATE: Fairness Attacks on Graph Learning

11 May 2023 (modified: 12 Dec 2023) · Submitted to NeurIPS 2023
Keywords: graph learning, fairness, adversarial attacks
TL;DR: We propose a general meta-learning poisoning attack strategy that exacerbates the unfairness of graph learning models while maintaining their utility on downstream tasks.
Abstract: We study fairness attacks on graph learning to answer the following question: how can a poisoning attack on a graph learning model exacerbate its bias? We formulate this question as a bi-level optimization problem and propose a meta-learning-based attacking framework named FATE. The proposed framework is broadly applicable across fairness definitions and graph learning models, as well as arbitrary choices of manipulation operations. We further instantiate FATE to attack statistical parity and individual fairness on graph neural networks. We conduct extensive experimental evaluations on real-world datasets for semi-supervised node classification. The results demonstrate that FATE can amplify the bias of graph neural networks, with or without fairness consideration, while maintaining utility on the downstream task. We hope this paper provides insights into the adversarial robustness of fair graph learning and sheds light on designing robust and fair graph learning in future studies.
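The bi-level structure described in the abstract can be illustrated with a toy, first-order sketch: the inner problem trains a victim model on the (possibly poisoned) graph, and the outer problem searches for edge flips that increase a bias measure of the trained model. Everything below is an illustrative assumption rather than FATE's actual instantiation: the "GNN" is a single mean-aggregation layer with a linear classifier, the outer search is a greedy edge-flip surrogate for the meta-gradient (the paper differentiates through the full inner training loop), and the bias measure is a statistical-parity gap on soft predictions.

```python
# Toy first-order sketch of a bi-level fairness poisoning attack.
# All modeling choices here are illustrative assumptions, not FATE itself.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic graph: n nodes, features X, labels y, binary sensitive attribute s.
n, d = 40, 5
X = rng.normal(size=(n, d))
s = rng.integers(0, 2, size=n)                 # sensitive attribute
y = (X[:, 0] + 0.1 * rng.normal(size=n) > 0).astype(float)
A = (rng.random((n, n)) < 0.1).astype(float)
A = np.triu(A, 1)
A = A + A.T                                    # symmetric, no self-loops

def embed(A, X):
    """One-hop mean aggregation: a stand-in for a GNN layer."""
    deg = A.sum(1, keepdims=True) + 1.0
    return (X + A @ X) / deg

def train_inner(A, X, y, steps=100, lr=0.1):
    """Inner problem: fit a linear classifier on aggregated features."""
    H = embed(A, X)
    w = np.zeros(d)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-H @ w))
        w -= lr * H.T @ (p - y) / n            # logistic-loss gradient step
    return w

def parity_gap(A, X, w, s):
    """Outer objective: statistical-parity gap of soft predictions."""
    p = 1.0 / (1.0 + np.exp(-embed(A, X) @ w))
    return abs(p[s == 1].mean() - p[s == 0].mean())

def attack(A, X, y, s, budget=4):
    """Greedily flip `budget` edges to maximize the parity gap, holding the
    trained weights fixed (a crude first-order surrogate for the
    meta-gradient with respect to the adjacency matrix)."""
    w = train_inner(A, X, y)                   # train victim once
    A_p = A.copy()
    for _ in range(budget):
        best, best_gap = None, parity_gap(A_p, X, w, s)
        for i in range(n):
            for j in range(i + 1, n):
                A_try = A_p.copy()
                A_try[i, j] = A_try[j, i] = 1.0 - A_try[i, j]
                g = parity_gap(A_try, X, w, s)
                if g > best_gap:
                    best, best_gap = (i, j), g
        if best is None:                       # no flip helps; stop early
            break
        i, j = best
        A_p[i, j] = A_p[j, i] = 1.0 - A_p[i, j]
    return A_p
```

By construction the greedy search only accepts flips that increase the bias of the frozen victim; the paper's full attack instead backpropagates through the inner training (meta-gradients) and also optimizes for retained utility, which this sketch omits.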
Supplementary Material: pdf
Submission Number: 10447