Abstract: Graph neural networks (GNNs) have achieved remarkable success in various domains. Nonetheless, studies have shown that GNNs may inherit and amplify societal bias, which critically hinders their application in high-stakes scenarios. Although efforts have been made to enhance the fairness of GNNs, most rely on the statistical fairness notion, which assumes that biases arise solely from sensitive attributes and thus neglects the labeling bias pervasive in real-world scenarios. Recent works address label bias by extending counterfactual fairness to graph data, but they overlook graph structure bias, whereby nodes sharing a sensitive attribute tend to connect more densely. To bridge these gaps, we propose a novel GNN framework, Fair Disentangled GNN (FDGNN), designed to mitigate biases from multiple sources and enhance the fairness of GNNs while preserving task-related information via fair node representation learning. Specifically, FDGNN first mitigates graph structure bias by enforcing consistent representations across subgroups. Subsequently, to achieve fair node representations, identified counterfactual instances guide the disentanglement of each node's representation, and sensitive-attribute-related information is eliminated via a de-identifiable sensitive attribute mechanism. Extensive experiments on multiple real-world graph datasets demonstrate that FDGNN outperforms state-of-the-art methods in graph fairness while achieving comparable utility performance.
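To make the pipeline described above concrete, the following is a minimal sketch of counterfactual-guided disentanglement on a toy graph. It is illustrative only, not the paper's actual FDGNN: the `DisentangledGCN` module, the flipped-sensitive-feature counterfactual view (a crude stand-in for the paper's identified counterfactual instances), and the decorrelation penalty (standing in for the de-identifiable sensitive attribute mechanism) are all assumptions for the sake of the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def normalize_adj(A):
    # Symmetric normalization: D^{-1/2} (A + I) D^{-1/2}
    A_hat = A + torch.eye(A.size(0))
    d_inv_sqrt = A_hat.sum(dim=1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * A_hat * d_inv_sqrt.unsqueeze(0)

class DisentangledGCN(nn.Module):
    """Toy 2-layer GCN whose output is split into a sensitive
    factor and a task factor (a stand-in for FDGNN's disentanglement)."""
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.W1 = nn.Linear(in_dim, hid_dim)
        self.W2 = nn.Linear(hid_dim, 2 * out_dim)
        self.classifier = nn.Linear(out_dim, 2)

    def forward(self, A_norm, X):
        h = F.relu(A_norm @ self.W1(X))
        z = A_norm @ self.W2(h)
        z_sens, z_task = z.chunk(2, dim=1)  # disentangled halves
        return z_sens, z_task, self.classifier(z_task)

# Toy data: 100 nodes; feature 0 is the binary sensitive attribute.
n, d = 100, 16
X = torch.randn(n, d)
s = (torch.rand(n) > 0.5).float()
X[:, 0] = s
y = torch.randint(0, 2, (n,))
A = (torch.rand(n, n) < 0.05).float()
A_norm = normalize_adj(((A + A.t()) > 0).float())

# Counterfactual view: flip the sensitive feature. The paper instead
# identifies counterfactual instances; this is a simplification.
X_cf = X.clone()
X_cf[:, 0] = 1.0 - X_cf[:, 0]

model = DisentangledGCN(d, 32, 16)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for epoch in range(50):
    z_sens, z_task, logits = model(A_norm, X)
    _, z_task_cf, _ = model(A_norm, X_cf)
    task_loss = F.cross_entropy(logits, y)
    # Counterfactual consistency: the task factor should be invariant
    # to interventions on the sensitive attribute.
    cf_loss = F.mse_loss(z_task, z_task_cf)
    # Crude decorrelation proxy for removing sensitive information
    # from the task factor.
    s_centered = (s - s.mean()).unsqueeze(1)
    corr = (z_task * s_centered).mean(dim=0).pow(2).sum()
    loss = task_loss + 1.0 * cf_loss + 0.1 * corr
    opt.zero_grad(); loss.backward(); opt.step()
```

The key design point this sketch mirrors is that fairness is imposed only on the task factor, so the consistency and decorrelation terms can strip sensitive information without destroying the representation capacity needed for prediction.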