Track: Graph algorithms and modeling for the Web
Keywords: fairness, prompt tuning, graph neural networks
Abstract: Graph prompt tuning has achieved significant success owing to its ability to effectively adapt pre-trained graph neural networks to various downstream tasks. However, pre-trained models may learn discriminatory representations due to the inherent bias in graph-structured data. Existing graph prompt tuning methods overlook such unfairness, leading to outputs biased toward certain demographic groups determined by sensitive attributes such as gender, age, and political ideology. To overcome this limitation, we propose a fairness-aware graph prompt tuning method, named FPrompt, which promotes fairness while enhancing the generality of any pre-trained GNN. FPrompt introduces hybrid graph prompts to augment counterfactual data while aligning the pre-training and downstream tasks, and applies edge modification to increase heterophily with respect to sensitive attributes. We provide a two-fold theoretical analysis: first, we demonstrate that FPrompt possesses universal capabilities in handling pre-trained GNN models across various pre-training strategies, ensuring its adaptability to different scenarios; second, we show that FPrompt effectively reduces the upper bound of generalized statistical parity, thereby mitigating the bias of pre-trained models. Extensive experiments demonstrate that FPrompt outperforms baseline models in both accuracy and fairness (by ~$33$\%) on benchmark datasets. Additionally, we introduce a new benchmark for transferable evaluation, showing that FPrompt achieves state-of-the-art generalization performance.
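For readers unfamiliar with the fairness notion the theoretical analysis bounds, the following is a minimal illustrative sketch (not the paper's implementation; function and variable names are assumptions) of the statistical parity difference, i.e., the gap in positive-prediction rates between demographic groups defined by a binary sensitive attribute:

```python
def statistical_parity_difference(preds, sensitive):
    """|P(yhat=1 | s=0) - P(yhat=1 | s=1)| for binary predictions
    and a binary sensitive attribute. Lower is fairer."""
    group0 = [p for p, s in zip(preds, sensitive) if s == 0]
    group1 = [p for p, s in zip(preds, sensitive) if s == 1]
    rate0 = sum(group0) / len(group0)  # positive rate in group s=0
    rate1 = sum(group1) / len(group1)  # positive rate in group s=1
    return abs(rate0 - rate1)

# Toy example: positive predictions skew toward the group with s = 1
preds = [1, 0, 0, 1, 1, 1]
sens  = [0, 0, 0, 1, 1, 1]
print(statistical_parity_difference(preds, sens))  # prints 0.666...
```

A smaller upper bound on this quantity, as established for FPrompt, means the tuned model's predictions depend less on the sensitive attribute.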
Submission Number: 2856