AuCoGNN: Enhancing Graph Fairness Learning Under Distribution Shifts With Automated Graph Generation

Published: 2025, Last Modified: 22 Jan 2026. IEEE Trans. Knowl. Data Eng. 2025. License: CC BY-SA 4.0
Abstract: Graph neural networks (GNNs) have shown strong performance on graph-structured data but may inherit bias from training data, leading to discriminatory predictions based on sensitive attributes such as gender and race. Existing fairness methods assume that training and testing data share the same distribution, but how fairness is affected under distribution shifts remains largely unexplored. To address this, we first identify theoretical factors that cause bias in graphs and examine how fairness is influenced by distribution shifts, focusing in particular on the representation distances between groups in training and testing graphs. Based on this analysis, we propose FatraGNN, which uses a graph generator to create biased graphs from different distributions and an alignment module to reduce representation distances for specific groups, improving both fairness and classification performance on unseen graphs. However, FatraGNN is limited in generating realistic graphs and in differentiating groups. To overcome these limitations, we introduce AuCoGNN, which adds an automated graph generation module and a contrastive alignment mechanism. The contrastive mechanism further improves fairness by minimizing the representation distance between the same groups across distributions while maximizing the representation distance between different groups. Experiments on real-world and semi-synthetic datasets demonstrate the effectiveness of both models in improving fairness and accuracy.
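The contrastive alignment idea described in the abstract can be sketched as a simple centroid-based loss: same-group representations from the original and a generated graph form positive pairs that are pulled together, while different groups are pushed apart up to a margin. This is an illustrative sketch under those assumptions, not the paper's actual objective; the function and helper names are hypothetical.

```python
import numpy as np

def group_centroids(reps, groups):
    """Mean representation per group (illustrative helper)."""
    return {g: reps[groups == g].mean(axis=0) for g in np.unique(groups)}

def contrastive_alignment_loss(reps_train, groups_train,
                               reps_gen, groups_gen, margin=1.0):
    """Toy contrastive alignment: minimize the distance between a group's
    centroids in the training graph and a generated graph (positive pairs),
    and keep centroids of different groups at least `margin` apart
    (negative pairs). A sketch of the idea, not the paper's loss."""
    c_tr = group_centroids(reps_train, groups_train)
    c_ge = group_centroids(reps_gen, groups_gen)
    # Pull term: same group across the two graphs should coincide.
    pull = sum(np.linalg.norm(c_tr[g] - c_ge[g]) for g in c_tr if g in c_ge)
    # Push term: different groups should stay separated by the margin.
    push = sum(max(0.0, margin - np.linalg.norm(c_tr[g] - c_ge[h]))
               for g in c_tr for h in c_ge if g != h)
    return pull + push
```

With identical, well-separated group centroids in both graphs the loss is zero; it grows as same-group centroids drift apart across distributions or as different groups collapse onto each other.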