Disentangled Counterfactual Graph Augmentation Framework for Fair Graph Learning with Information Bottleneck

Published: 2024 · Last Modified: 12 Feb 2026 · ECML/PKDD (1) 2024 · CC BY-SA 4.0
Abstract: Graph Neural Networks (GNNs) are susceptible to inheriting and even amplifying biases present in datasets, leading to discriminatory decision-making. Our empirical observations reveal that the inconsistent distribution of sensitive attributes conditioned on labels contributes significantly to unfairness. To mitigate this problem, we propose rectifying this inconsistency in the original dataset through a counterfactual augmentation strategy. Existing methods usually generate counterfactual samples from an entangled representation space and thus fail to distinguish the different dependencies on sensitive attributes. We therefore propose a novel disentangled counterfactual graph augmentation method based on Information Bottleneck theory, named Fair Disentangled Graph Information Bottleneck (FDGIB). Specifically, FDGIB embeds graphs into two disentangled representation spaces: sensitive-related and sensitive-independent. By satisfying three conditions, FDGIB theoretically guarantees the disentanglement of the different sensitive dependencies. We obtain credible counterfactually augmented graphs that restore consistency in the data distribution and yield fair representations. FDGIB serves as a plug-and-play preprocessing framework that can be combined with any GNN. We validate the effectiveness of our model in promoting fair graph learning through extensive experiments. Our source code is available at https://github.com/Evanlyf/FDGIB.
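To make the high-level description concrete, the following is a minimal, hypothetical sketch in plain PyTorch of the two-branch idea: an encoder that produces a sensitive-related and a sensitive-independent embedding for each node, and counterfactuals formed by recombining the two across sensitive groups. The names (`DisentangledEncoder`, `counterfactual_mix`) and the simple swap heuristic are illustrative assumptions, not the paper's actual implementation; see the linked repository for the real method.

```python
# Hypothetical sketch of the disentangled counterfactual idea described in the
# abstract; the class/function names and logic are assumptions, not FDGIB's code.
import torch
import torch.nn as nn


class DisentangledEncoder(nn.Module):
    """Embeds nodes into two separate spaces: one intended to capture
    sensitive-related factors, one intended to be sensitive-independent."""

    def __init__(self, in_dim: int, hid_dim: int):
        super().__init__()
        self.sens_branch = nn.Linear(in_dim, hid_dim)   # sensitive-related channel
        self.indep_branch = nn.Linear(in_dim, hid_dim)  # sensitive-independent channel

    def forward(self, x: torch.Tensor, adj: torch.Tensor):
        # One round of neighborhood aggregation as a stand-in for a GNN layer.
        h = adj @ x
        z_sens = torch.relu(self.sens_branch(h))
        z_indep = torch.relu(self.indep_branch(h))
        return z_sens, z_indep


def counterfactual_mix(z_sens, z_indep, sens_attr):
    """Builds counterfactual embeddings by pairing a node's sensitive-independent
    part with the sensitive-related part of a randomly permuted node from the
    other sensitive group (a simple swap heuristic for illustration only)."""
    perm = torch.randperm(z_sens.size(0))
    other_group = sens_attr[perm] != sens_attr       # True where the partner differs
    mixed = torch.cat([z_sens[perm], z_indep], dim=1)
    original = torch.cat([z_sens, z_indep], dim=1)
    # Nodes matched within the same group simply keep their original embedding.
    return torch.where(other_group.unsqueeze(1), mixed, original)


# Toy usage with random data.
n, d, h = 8, 16, 32
x = torch.randn(n, d)
adj = torch.eye(n)                      # identity adjacency keeps the toy example simple
sens = torch.randint(0, 2, (n,))        # binary sensitive attribute
enc = DisentangledEncoder(d, h)
z_s, z_i = enc(x, adj)
z_counterfactual = counterfactual_mix(z_s, z_i, sens)
```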