Stable Fair Graph Representation Learning with Lipschitz Constraint

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: We address training instability in fair GNNs by introducing a Lipschitz constraint and distributionally robust optimization, leading to more stable and reliable model performance.
Abstract: Group fairness based on adversarial training has gained significant attention on graph data, where it is typically implemented by masking sensitive attributes to generate fair feature views. However, existing models suffer from training instability due to the uncertainty of the generated masks and the trade-off between fairness and utility. In this work, we propose a stable fair Graph Neural Network (SFG) that maintains training stability while preserving accuracy and fairness. Specifically, we first theoretically derive a tight upper Lipschitz bound that controls the stability of existing adversarial-based models, and we employ a stochastic projected subgradient algorithm, operating in a block-coordinate manner, to constrain this bound. Additionally, we construct an uncertainty set to train the model, which prevents unstable training by dropping overfitting nodes caused by chasing fairness. Extensive experiments on three real-world datasets demonstrate that SFG is stable and outperforms other state-of-the-art adversarial-based methods in both fairness and utility. Code is available at https://github.com/sh-qiangchen/SFG.
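As a rough illustration of the block-coordinate projection step described above (a minimal sketch under assumed details, not the paper's actual algorithm; `project_spectral_norm` and `per_layer_bound` are hypothetical names), each layer's weight matrix can be projected back onto a spectral-norm ball after a subgradient update:

```python
import torch

def project_spectral_norm(weight: torch.Tensor, bound: float) -> torch.Tensor:
    """Project a weight matrix onto {W : ||W||_2 <= bound} by clipping
    its singular values; one block of a block-coordinate projected step."""
    U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
    return U @ torch.diag(torch.clamp(S, max=bound)) @ Vh

# Block-coordinate usage: after each (sub)gradient update, project each
# layer's weights in turn so the product of per-layer spectral norms stays
# within a global Lipschitz budget. The bound value here is illustrative;
# the paper derives a tight bound for adversarial-based fair models.
model = torch.nn.Sequential(
    torch.nn.Linear(16, 32), torch.nn.ReLU(), torch.nn.Linear(32, 2)
)
per_layer_bound = 1.0  # hypothetical per-layer budget
with torch.no_grad():
    for module in model.modules():
        if isinstance(module, torch.nn.Linear):
            module.weight.copy_(project_spectral_norm(module.weight, per_layer_bound))
```

Clipping singular values is the Euclidean projection onto the spectral-norm ball, so each block step keeps that layer within its share of the Lipschitz budget.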
Lay Summary: Ensuring fairness in graph neural networks (GNNs) is a critical yet challenging task. Existing methods often rely on masking sensitive attributes and using adversarial learning to promote fair representations. However, these methods frequently suffer from unstable training, which undermines their reliability and trustworthiness. In this work, we propose a new method called Stable Fair GNN (SFG), which aims to enhance training stability while preserving utility and fairness. We theoretically derive a tight Lipschitz bound for fair graph models with a generator, which constrains weight fluctuations. Additionally, we introduce a distributionally robust optimization (DRO) framework that makes the model resilient to mask variations in the fair representations, helping it avoid overfitting to fairness-related noise. Experiments on multiple real-world datasets demonstrate that SFG matches or surpasses existing methods in both accuracy and fairness, while significantly reducing training instability. Our findings highlight that achieving trustworthy and effective graph learning requires not only improving fairness but also explicitly controlling training stability.
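The node-dropping behavior of the DRO component can be sketched with a CVaR-style worst-case loss that zeroes out the lowest-loss nodes (a hedged illustration only; `cvar_dro_loss` and `alpha` are assumed names, and the paper's uncertainty set may be constructed differently):

```python
import torch

def cvar_dro_loss(per_node_loss: torch.Tensor, alpha: float = 0.8) -> torch.Tensor:
    """Average over the hardest alpha-fraction of nodes (a CVaR-style
    uncertainty set). Nodes with the smallest losses -- the ones the model
    may have overfit while chasing fairness -- get zero weight and are
    effectively dropped from the training objective."""
    k = max(1, int(alpha * per_node_loss.numel()))
    worst, _ = torch.topk(per_node_loss, k)
    return worst.mean()

# Usage sketch with dummy per-node losses: replace the plain mean with the
# robust objective; loss.backward() would follow in a real training loop.
per_node_loss = torch.nn.functional.cross_entropy(
    torch.randn(100, 2), torch.randint(0, 2, (100,)), reduction="none"
)
loss = cvar_dro_loss(per_node_loss, alpha=0.8)
```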
Link To Code: https://github.com/sh-qiangchen/SFG
Primary Area: Deep Learning->Graph Neural Networks
Keywords: graph neural network, fairness, stability
Submission Number: 555