Topology Matters in Fair Graph Learning: a Theoretical Pilot Study

21 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Primary Area: societal considerations including fairness, safety, privacy
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Topology; Fair Graph Learning
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Abstract: Recent advances in fair graph learning observe that graph neural networks (GNNs) further amplify prediction bias compared with multilayer perceptrons (MLPs), yet the reason behind this remains unclear. In this paper, as a pilot study, we provide a theoretical understanding of when and why bias enhancement happens in GCN-like aggregation under the contextual stochastic block model (CSBM); we leave more general analysis of other aggregation operations and random graph models to future work (see the appendix for further discussion). Specifically, we define bias enhancement as higher node representation bias after aggregation than before aggregation. We provide a sufficient condition, stated in terms of the statistical properties of the graph data, that provably delineates when bias enhancement occurs during aggregation. This sufficient condition goes beyond intuition and serves as a valuable tool for deriving data-centric insights. Motivated by these insights, we propose a fair graph rewiring algorithm, named FairGR, which reduces the sensitive homophily coefficient while preserving useful graph topology. Experiments on node classification tasks demonstrate that FairGR mitigates prediction bias with comparable performance on three real-world datasets. Additionally, by refining the graph topology, FairGR is compatible with many state-of-the-art methods, such as adding regularization, adversarial debiasing, and fair mixup; i.e., FairGR is a plug-in fairness method that can be adapted to improve existing fair graph learning strategies. The code is available at https://anonymous.4open.science/r/FairGR-B65D/.
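The two quantities the abstract centers on, the sensitive homophily coefficient and representation bias before versus after GCN-like aggregation, can be illustrated with a small sketch. This is a minimal illustration under assumed definitions, not the authors' implementation: the helper names (`sensitive_homophily`, `group_separability`, `mean_aggregate`) and the signal-to-noise-style bias proxy are assumptions, and the paper's formal CSBM definitions may differ.

```python
# A minimal sketch (assumed helper names, not the paper's implementation):
# it illustrates (i) the sensitive homophily coefficient and (ii) how a
# GCN-like mean aggregation can make the sensitive attribute easier to
# recover from node representations on a homophilous graph.
import numpy as np

def sensitive_homophily(edges, s):
    """Fraction of edges whose endpoints share the sensitive attribute."""
    return sum(s[u] == s[v] for u, v in edges) / len(edges)

def group_separability(X, s):
    """Bias proxy (an assumption, not the paper's definition): gap between
    group means, scaled by the average within-group standard deviation."""
    gap = np.linalg.norm(X[s == 0].mean(axis=0) - X[s == 1].mean(axis=0))
    spread = 0.5 * (X[s == 0].std(axis=0).mean() + X[s == 1].std(axis=0).mean())
    return gap / spread

def mean_aggregate(X, edges, n):
    """GCN-like mean aggregation with self-loops."""
    agg, deg = X.copy(), np.ones(n)
    for u, v in edges:
        agg[u] += X[v]; agg[v] += X[u]
        deg[u] += 1;    deg[v] += 1
    return agg / deg[:, None]

rng = np.random.default_rng(0)
n = 200
s = rng.integers(0, 2, size=n)                   # binary sensitive attribute
X = rng.normal(size=(n, 8)) + 0.5 * s[:, None]   # group-shifted node features
# CSBM-style wiring: same-group pairs connect ten times more often.
edges = [(u, v) for u in range(n) for v in range(u + 1, n)
         if rng.random() < (0.05 if s[u] == s[v] else 0.005)]

print("sensitive homophily    :", round(sensitive_homophily(edges, s), 3))
print("bias before aggregation:", round(group_separability(X, s), 3))
print("bias after aggregation :",
      round(group_separability(mean_aggregate(X, edges, n), s), 3))
```

On this homophilous toy graph the separability of the sensitive groups typically grows after aggregation, which mirrors the bias-enhancement phenomenon the abstract describes; rewiring to lower the sensitive homophily coefficient, as FairGR aims to do, weakens exactly this effect.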
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 4075