Keywords: Graph Neural Networks, Fairness, Node classification
TL;DR: We study the relationship between fairness and homophily using CSBM-S, and propose FairEST, a fairness-aware GNN.
Abstract: Graph Neural Networks (GNNs) can propagate sensitive signals via message passing, especially on homophilous graphs where edges preferentially connect nodes sharing sensitive attributes. We revisit fairness through the lens of homophily using CSBM-S, a synthetic model that independently controls label homophily $h_y$ and sensitive homophily $h_s$, enabling precise and repeatable evaluation of group fairness. CSBM-S reveals two key observations: (i) group disparity peaks when $h_y \approx 0.5$; and (ii) bias consistently diminishes as $h_s \rightarrow 0.5$. Guided by these insights, we propose FairEST, which enforces $h_s \approx 0.5$ by flipping the sensitive attribute and its most correlated features during training. Across diverse benchmarks and backbones, FairEST attains the lowest bias on most encoder-dataset pairs with comparable accuracy, yielding average absolute reductions of 1.63% ($\Delta \mathrm{SP}$) and 1.28% ($\Delta \mathrm{EO}$) over the prior state-of-the-art. Together, CSBM-S and FairEST provide a homophily-centric toolkit for analyzing and mitigating bias in graph representation learning.
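The abstract's central device is a synthetic model with independent control over label homophily $h_y$ and sensitive homophily $h_s$. The paper's exact CSBM-S construction is not given here, so the following is only a minimal sketch of one plausible parametrisation (the function names, the $2h$ scaling of a base edge rate, and all parameter values are assumptions, not the authors' specification): edge probabilities are scaled by label agreement via $h_y$ and by sensitive-attribute agreement via $h_s$, so that either homophily can be tuned to 0.5 to remove that attribute's influence on the topology.

```python
import numpy as np

def csbm_s(n=400, h_y=0.8, h_s=0.5, p=0.05, seed=0):
    """Sample a CSBM-S-style graph (hypothetical parametrisation).

    Binary labels y and sensitive attributes s are drawn independently;
    an edge (i, j) is kept with probability p scaled by 2*h_y or
    2*(1 - h_y) depending on label agreement, and likewise by h_s for
    sensitive-attribute agreement. Setting h = 0.5 makes the factor 1,
    so the graph ignores that attribute entirely.
    """
    rng = np.random.default_rng(seed)
    y = rng.integers(0, 2, n)
    s = rng.integers(0, 2, n)
    same_y = y[:, None] == y[None, :]
    same_s = s[:, None] == s[None, :]
    prob = p * (2 * np.where(same_y, h_y, 1 - h_y)) \
             * (2 * np.where(same_s, h_s, 1 - h_s))
    upper = np.triu(rng.random((n, n)) < prob, k=1)
    adj = upper | upper.T  # symmetric, no self-loops
    return adj, y, s

def edge_homophily(adj, attr):
    """Fraction of edges whose endpoints share the attribute value."""
    i, j = np.triu_indices_from(adj, k=1)
    mask = adj[i, j]
    return float((attr[i[mask]] == attr[j[mask]]).mean())

adj, y, s = csbm_s(n=400, h_y=0.9, h_s=0.5, seed=0)
print(edge_homophily(adj, y))  # close to 0.9
print(edge_homophily(adj, s))  # close to 0.5
```

With independent binary attributes, the expected edge-level label homophily equals $h_y$ and the sensitive homophily equals $h_s$, which is the decoupling the abstract relies on when it varies one while holding the other fixed.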
Supplementary Material: zip
Primary Area: learning on graphs and other geometries & topologies
Submission Number: 10902