Endowing Pre-trained Graph Models with Provable Fairness

Published: 23 Jan 2024, Last Modified: 23 May 2024
Keywords: Graph Neural Networks, Fairness, Pre-trained Graph Models
Abstract: Pre-trained graph models (PGMs) have received considerable attention in graph machine learning by capturing transferable inherent structural properties and applying them to different downstream tasks. Similar to pre-trained language models, PGMs also inherit biases from human society, resulting in discriminatory behavior in downstream applications. However, the debiasing process of most existing methods is coupled with parameter optimization of GNNs, making them inefficient for debiasing PGMs. Moreover, these debiasing methods lack a theoretical guarantee, i.e., provable lower bounds on the fairness of model predictions, which directly provides assurance in practical scenarios. To overcome these limitations, we propose a novel framework that endows pre-trained graph models with provable fairness (called GraphPAR). GraphPAR freezes the parameters of PGMs and applies a parameter-efficient adapter on node representations to make the model's predictions fairer. Specifically, we design a sensitive attribute augmenter that extends node representations with different sensitive attribute semantics for each node, and then employ two adversarial debiasing methods to optimize the adapter's parameters. Furthermore, based on the proposed framework GraphPAR, we quantify whether the fairness of each node is provable, i.e., whether its prediction is always fair within a certain range of sensitive attribute semantics. Experimental evaluations on real-world datasets demonstrate that GraphPAR achieves state-of-the-art performance and fairness on node classification tasks. Furthermore, with GraphPAR, around 90% of nodes have provable fairness.
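To make the certification idea concrete, here is a minimal sketch, not the paper's actual method: it assumes frozen node embeddings from a PGM, a hypothetical unit direction `s_dir` encoding sensitive-attribute semantics, and a toy linear head standing in for the adapter plus classifier. For a linear head, the prediction is provably invariant to all shifts `h + t*s_dir` with `|t| <= eps` exactly when the worst-case shift `eps*|w·s_dir|` cannot flip the decision margin. All names and the linear-head assumption are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen PGM node embeddings (random stand-in for a real pre-trained model).
n, d = 200, 16
H = rng.normal(size=(n, d))

# Hypothetical unit direction in embedding space encoding sensitive-attribute semantics.
s_dir = rng.normal(size=d)
s_dir /= np.linalg.norm(s_dir)

# Sensitive attribute augmenter: shift an embedding along s_dir by t in [-eps, eps].
eps = 0.5
def augment(h, t):
    return h + t * s_dir

# Toy linear classification head standing in for the adapter + classifier.
w = rng.normal(size=d)
b = 0.1

def certified_fair(h):
    """True when sign(w @ h' + b) is constant for every h' = h + t*s_dir, |t| <= eps:
    the worst-case shift eps*|w @ s_dir| must be smaller than the decision margin."""
    margin = abs(w @ h + b)
    return eps * abs(w @ s_dir) < margin

certified = np.array([certified_fair(h) for h in H])
print(f"{certified.mean():.0%} of nodes certified fair")
```

Because the head is linear, checking the two extreme shifts `t = ±eps` suffices to verify a certificate; a real adapter would be nonlinear and need a smoothing- or bound-propagation-based certificate instead.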
Track: Graph Algorithms and Learning for the Web
Submission Guidelines Scope: Yes
Submission Guidelines Blind: Yes
Submission Guidelines Format: Yes
Submission Guidelines Limit: Yes
Submission Guidelines Authorship: Yes
Student Author: Yes
Submission Number: 2385