Abstract: Graph neural networks (GNNs) have recently been integrated into knowledge graph representation learning. The efficient message-passing functions in GNNs capture latent relationships between entities within these semantic networks, which aids various downstream tasks such as link prediction, node classification, and entity alignment. However, representation learning remains deficient on graphs containing loops (cycles) and self-loops. Traditional message-passing functions induce biased learning on such knowledge graphs, leading to skewed predictions. This work presents a detailed analysis of the representation bias generated by these functions on knowledge graphs containing short loops and self-loops. We demonstrate the variance in performance on knowledge graphs of varying topology across two downstream tasks: link prediction and entity alignment. The experiments show that the representations produced by popular learning algorithms are prone to capturing biases in the graphs' structures. These biases, however, affect the formulated downstream tasks differently, motivating research into topology-invariant representation algorithms for knowledge graphs.
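As an illustrative sketch only (not the paper's experimental setup), the following minimal NumPy example shows how a single self-loop changes the output of a plain mean-aggregation message-passing step, hinting at the kind of topology-induced representation shift the abstract describes. The graph, features, and function are hypothetical stand-ins for the general mechanism.

```python
import numpy as np

# Hypothetical toy 3-node graph: node 0 connects to nodes 1 and 2.
X = np.array([[1.0, 0.0],   # features of node 0
              [0.0, 1.0],   # features of node 1
              [0.0, 1.0]])  # features of node 2

A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)

def mean_message_pass(A, X):
    """One round of mean-aggregation message passing: each node
    averages its neighbours' features (no learned weights or
    nonlinearity, for illustration only)."""
    deg = A.sum(axis=1, keepdims=True)
    return (A @ X) / np.clip(deg, 1, None)

# Without a self-loop, node 0's update ignores its own features.
print(mean_message_pass(A, X)[0])       # -> [0. 1.]

# Adding a self-loop on node 0 pulls the update back toward its own
# features: the representation shifts purely because of topology,
# not because of any change in the underlying semantics.
A_loop = A.copy()
A_loop[0, 0] = 1.0
print(mean_message_pass(A_loop, X)[0])  # -> [0.3333 0.6667]
```

Under this simplified assumption, two semantically identical entities whose local neighbourhoods differ only in loop structure would receive different representations, which is the flavour of bias analyzed in the paper.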