Networked Inequality: Preferential Attachment Bias in Graph Neural Network Link Prediction

Published: 28 Oct 2023, Last Modified: 21 Dec 2023, NeurIPS 2023 GLFrontiers Workshop (Oral)
Keywords: graph learning, fairness, link prediction
TL;DR: We show that Graph Convolutional Networks can have a preferential attachment bias in link prediction. We analyze how this bias can amplify degree and power imbalances in networks, and propose a training-time strategy to alleviate this unfairness.
Abstract: Graph neural network (GNN) link prediction is increasingly deployed in citation, collaboration, and online social networks to recommend academic literature, collaborators, and friends. While prior research has investigated the dyadic fairness of GNN link prediction, the within-group fairness and "rich get richer" dynamics of link prediction remain underexplored. Yet these dynamics have significant consequences for degree and power imbalances in networks. In this paper, we shed light on how degree bias in networks affects Graph Convolutional Network (GCN) link prediction. In particular, we theoretically uncover that GCNs with a symmetric normalized graph filter have a within-group preferential attachment bias. We validate our theoretical analysis on real-world citation, collaboration, and online social networks. We further bridge GCN's preferential attachment bias with unfairness in link prediction and propose a new within-group fairness metric. This metric quantifies disparities in link prediction scores between social groups, with the goal of combating the amplification of degree and power disparities. Finally, we propose a simple training-time strategy to alleviate within-group unfairness, and we show that it is effective on citation, online social, and credit networks.
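The abstract refers to the symmetric normalized graph filter used by GCNs and to within-group score disparities between high- and low-degree nodes. The sketch below is a minimal illustration of those quantities on a toy graph, assuming an inner-product decoder, random (untrained) weights, and made-up group labels; it is not the paper's exact model, metric, or training setup.

```python
import numpy as np

# Toy undirected graph: symmetric adjacency matrix A, no self-loops.
A = np.array([
    [0, 1, 1, 1, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 0, 0, 0],
    [1, 0, 0, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)

# Symmetric normalized graph filter with self-loops (Kipf & Welling style):
# A_hat = D_tilde^{-1/2} (A + I) D_tilde^{-1/2}
A_tilde = A + np.eye(A.shape[0])
d = A_tilde.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt

# One linear GCN layer over random node features (weights are untrained,
# so this only illustrates the quantities involved, not the learned bias).
rng = np.random.default_rng(0)
X = rng.normal(size=(A.shape[0], 8))   # node features
W = rng.normal(size=(8, 4))            # layer weights
H = A_hat @ X @ W                      # node representations

# Inner-product decoder: S[i, j] is the link prediction score for (i, j).
S = H @ H.T

# Hypothetical group labels (e.g., two social groups). Within each group,
# compare mean scores emitted by high- vs. low-degree nodes, a rough
# stand-in for the kind of within-group disparity the paper measures.
groups = np.array([0, 0, 0, 1, 1, 1])
deg = A.sum(axis=1)
for g in np.unique(groups):
    idx = np.where(groups == g)[0]
    med = np.median(deg[idx])
    hi = idx[deg[idx] > med]
    lo = idx[deg[idx] <= med]
    print(f"group {g}: mean within-group score from high-degree nodes "
          f"{S[np.ix_(hi, idx)].mean():.3f}, from low-degree nodes "
          f"{S[np.ix_(lo, idx)].mean():.3f}")
```

With random weights the printed gap is arbitrary; the paper's claim concerns trained GCNs, where the symmetric normalization couples scores to node degree and produces a systematic within-group preferential attachment bias.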
Submission Number: 50