Fair Graph Message Passing with Transparency

Published: 01 Feb 2023, Last Modified: 13 Feb 2023. Submitted to ICLR 2023.
Keywords: Fairness, Transparency, Graph Neural Networks
TL;DR: We aim to achieve fair message passing with transparency by explicitly using sensitive attributes in forward propagation instead of backward propagation.
Abstract: Recent works achieve fair representations and predictions through regularization, adversarial debiasing, and contrastive learning in graph neural networks (GNNs). These methods implicitly encode sensitive attribute information in the trained model weights via backward propagation. In practice, we pursue not only a fair machine learning model but also one whose fairness is perceptible to the public. For current fairness methods, how sensitive attribute information is used to achieve fair prediction remains a black box. In this work, we first propose the concept of transparency to describe whether or not a model can convey its fairness perception to the public. Motivated by the fact that current fairness models lack transparency, we aim to pursue a fair machine learning model with transparency by explicitly using sensitive attributes for fair prediction in forward propagation. Specifically, we develop an effective and transparent Fair Message Passing (FMP) scheme that uses sensitive attribute information in forward propagation. In this way, FMP explicitly uncovers how sensitive attributes influence the final prediction. Additionally, the FMP scheme aggregates useful information from neighbors and mitigates bias in a unified framework, simultaneously achieving graph smoothness and fairness objectives. An acceleration approach is also adopted to improve the efficiency of FMP. Experiments on node classification tasks demonstrate that the proposed FMP outperforms state-of-the-art baselines in terms of fairness and accuracy on three real-world datasets. The code is available at https://anonymous.4open.science/r/FMP-AD84.
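The abstract does not spell out the FMP update rule, but the general idea it describes, neighbor aggregation for graph smoothness combined with an explicit debiasing step that consumes the sensitive attribute in the forward pass, can be sketched. The following is a minimal, illustrative sketch only: the signed group indicator, the projection-style debiasing step, and all function and parameter names are assumptions for illustration, not the paper's actual FMP scheme.

```python
import torch

def fair_message_passing(x, adj_norm, group, num_layers=2,
                         step_size=0.5, fair_weight=1.0):
    """Illustrative sketch: sensitive attributes enter the *forward* pass
    as an explicit debiasing step, rather than only shaping the loss in
    backward propagation. Not the paper's exact FMP update.

    x        : (N, d) node features
    adj_norm : (N, N) symmetrically normalized adjacency
    group    : (N,)   binary sensitive attribute (0/1)
    """
    group = group.float()
    # Signed group indicator: +1/|S1| for group 1, -1/|S0| for group 0,
    # so that delta @ h measures the mean representation gap between groups.
    n1 = group.sum().clamp(min=1)
    n0 = (1 - group).sum().clamp(min=1)
    delta = group / n1 - (1 - group) / n0          # (N,)

    h = x
    for _ in range(num_layers):
        # Smoothness step: aggregate information from neighbors.
        h = (1 - step_size) * h + step_size * (adj_norm @ h)
        # Transparency/fairness step: explicitly remove the component of
        # the representation aligned with the group-gap direction.
        gap = delta @ h                             # (d,) per-channel gap
        h = h - fair_weight * torch.outer(delta, gap) / (delta @ delta)
    return h

# Toy usage: 4 nodes, 3 features, two sensitive groups.
x = torch.randn(4, 3)
adj = torch.tensor([[0, 1, 1, 0], [1, 0, 0, 1],
                    [1, 0, 0, 1], [0, 1, 1, 0]], dtype=torch.float)
deg = adj.sum(1)
adj_norm = adj / torch.sqrt(torch.outer(deg, deg))  # D^{-1/2} A D^{-1/2}
group = torch.tensor([0., 0., 1., 1.])
h = fair_message_passing(x, adj_norm, group)
```

Because the debiasing correction is computed inside the forward pass, the role of the sensitive attribute in shaping the final prediction is directly inspectable, which is the transparency property the abstract emphasizes.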
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Social Aspects of Machine Learning (eg, AI safety, fairness, privacy, interpretability, human-AI interaction, ethics)