Counterfactual Fairness for Graph Neural Networks with Limited and Privacy-Protected Sensitive Attributes

Published: 05 Sept 2024 · Last Modified: 16 Oct 2024 · ACML 2024 Conference Track · CC BY 4.0
Keywords: fair graph neural network, counterfactual fairness, graph representation learning, privacy protection
Verify Author List: I have double-checked the author list and understand that additions and removals will not be allowed after the submission deadline.
Abstract: Graph Neural Networks (GNNs) have shown outstanding performance in learning graph representations, which has led to their increasing adoption in high-risk areas. However, GNNs may inherit biases from the graph data and make unfair predictions against protected subgroups. A natural way to eliminate such bias is to pursue counterfactual fairness from a causal perspective. Achieving counterfactual fairness, however, requires sufficient sensitive attributes as guidance, which is often infeasible in the real world: users with varying privacy preferences publish their sensitive attributes selectively, so only a limited number can be collected, and even the users who do publish them still face privacy risks. In this paper, we first consider the setting in which sensitive attributes are only partially observed and propose PCFGR (Partially observed sensitive Attributes in Counterfactual Fair Graph Representation Learning), a framework that learns fair graph representations from limited sensitive attributes. The framework trains a sensitive attribute estimator that supplies sufficient and accurate sensitive attributes; with these estimates, it generates counterfactuals and eliminates bias effectively. Second, to protect the privacy of the sensitive attributes themselves, we further propose PCFGR$\backslash$D. Specifically, PCFGR$\backslash$D first perturbs the sensitive attributes using Local Differential Privacy (LDP) and then employs a forward correction loss to train an accurate sensitive attribute estimator despite the injected noise. We conduct extensive experiments, and the results show that our approach outperforms the alternatives in balancing utility and fairness.
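The two privacy mechanisms named in the abstract, LDP perturbation of the sensitive attributes and a forward correction loss for the estimator, are standard components that can be sketched as follows. This is a minimal illustration only, not the authors' implementation: it assumes binary sensitive attributes, uses randomized response as the LDP mechanism, and applies forward loss correction in the style of Patrini et al. (2017); the function names `randomized_response` and `forward_corrected_loss` are hypothetical.

```python
import math
import torch
import torch.nn.functional as F

def randomized_response(s: torch.Tensor, eps: float) -> torch.Tensor:
    """eps-LDP randomized response for binary sensitive attributes s in {0, 1}:
    each user reports the true bit with probability p = e^eps / (e^eps + 1)
    and the flipped bit otherwise."""
    p = math.exp(eps) / (math.exp(eps) + 1.0)
    keep = torch.rand(s.shape) < p
    return torch.where(keep, s, 1 - s)

def forward_corrected_loss(logits: torch.Tensor, noisy_s: torch.Tensor,
                           eps: float) -> torch.Tensor:
    """Forward loss correction: map the estimator's clean-attribute
    probabilities through the known transition matrix T of the
    randomized-response channel, then score against the perturbed labels."""
    p = math.exp(eps) / (math.exp(eps) + 1.0)
    T = torch.tensor([[p, 1.0 - p],
                      [1.0 - p, p]], device=logits.device)
    clean_probs = F.softmax(logits, dim=-1)   # P(true attribute | node)
    noisy_probs = clean_probs @ T             # P(reported attribute | node)
    return F.nll_loss(noisy_probs.clamp_min(1e-12).log(), noisy_s.long())

# Example: attributes are perturbed once at collection time; the estimator
# is then trained only on the perturbed copies (estimator is hypothetical).
s_true = torch.randint(0, 2, (100,))
s_noisy = randomized_response(s_true, eps=1.0)
# loss = forward_corrected_loss(estimator(node_features), s_noisy, eps=1.0)
```

Because the transition matrix T is known exactly from the privacy budget eps, minimizing the forward-corrected loss on the perturbed labels is, in expectation, consistent with minimizing cross-entropy on the clean labels, which is what allows the estimator to remain accurate under LDP noise.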
Primary Area: Trustworthy Machine Learning (accountability, explainability, transparency, causality, fairness, privacy, robustness, autoML, etc.)
Paper Checklist Guidelines: I certify that all co-authors of this work have read and commit to adhering to the guidelines in Call for Papers.
Student Author: Yes
Submission Number: 207