Fast&Fair: Training Acceleration and Bias Mitigation for GNNs

Published: 10 May 2023, Last Modified: 10 May 2023. Accepted by TMLR.
Abstract: Graph neural networks (GNNs) have been demonstrated to achieve state-of-the-art performance on a number of graph-based learning tasks, leading to their increasing adoption across various domains. However, it has been shown that GNNs may inherit and even amplify bias present in the training data, producing unfair results towards certain sensitive groups. Meanwhile, training GNNs introduces additional challenges, such as slow convergence and possible instability. Faced with these limitations, this work proposes FairNorm, a unified normalization-based framework that reduces bias in GNN-based learning while also providing provably faster convergence. Specifically, FairNorm applies individual normalization operators to different sensitive groups and introduces fairness regularizers on the learnable parameters of the normalization layers to reduce bias in GNNs. The design of the proposed regularizers is built upon analyses that illuminate the sources of bias in graph-based learning. Experiments on node classification over real-world networks demonstrate the effectiveness of the proposed scheme in improving fairness, in terms of statistical parity and equal opportunity, compared to fairness-aware baselines. In addition, it is empirically shown that the proposed framework leads to faster convergence compared to the naive baseline in which no normalization is employed.
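The abstract's core mechanism, separate normalization statistics per sensitive group, can be illustrated with a minimal sketch. This is a hypothetical simplification, not the paper's implementation: the function name `groupwise_normalize` and the use of plain numpy are assumptions, and the paper's learnable scale/shift parameters and fairness regularizers are omitted here.

```python
import numpy as np

def groupwise_normalize(h, groups, eps=1e-5):
    """Normalize node embeddings separately within each sensitive group.

    Hypothetical sketch of the group-wise normalization idea: each group
    is standardized using its own mean and variance, so that no single
    group's statistics dominate the normalization. In the full framework,
    each group's normalization layer also carries learnable parameters
    that are regularized to reduce bias (omitted in this sketch).

    h      : (N, d) array of node embeddings
    groups : (N,) array of sensitive-group ids
    """
    out = np.empty_like(h, dtype=float)
    for g in np.unique(groups):
        mask = groups == g
        mu = h[mask].mean(axis=0)       # per-group mean
        var = h[mask].var(axis=0)       # per-group variance
        out[mask] = (h[mask] - mu) / np.sqrt(var + eps)
    return out
```

After this operation, each group's embeddings have (approximately) zero mean and unit variance along every feature dimension, which is also the kind of re-centering that can aid optimization stability during training.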
Submission Length: Regular submission (no more than 12 pages of main content)
Supplementary Material: zip
Assigned Action Editor: ~Charles_Xu1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 715